Language, Speech: Add generated code samples. #9153

Merged (9 commits) · Sep 4, 2019
Conversation

@beccasaurus (Contributor) commented on Aug 30, 2019

Updated!

This Pull Request adds GAPIC-generated code samples for Natural Language and Speech-to-Text.

  • Enable sample generation in synth.py for the Language and Speech libraries (see the synth.py sketch below).
  • Run synthtool and add generated samples/.
  • Update "Tests" section of CONTRIBUTING.rst for running generated sample tests.

    Note: I couldn't get the reST to render locally, so I'm not sure I'm using the right formatting.

👓 This still needs to be configured to run tests via Kokoro.

  • Update Tests to run these tests (noxfile.py updated in each library for the sample tests; see the noxfile sketch below)

Right now, you can only run the tests for these samples locally (because Kokoro isn't configured).
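For reference, the synth.py change for each library boils down to one new argument. Here's a minimal sketch, assuming the usual SynthTool layout (the config_path value is illustrative, not the exact one from this PR):

```python
# synth.py sketch (Language shown; Speech is analogous).
# NOTE: the config_path below is illustrative, not the exact value from this PR.
import synthtool as s
from synthtool import gcp

gapic = gcp.GAPICGenerator()

library = gapic.py_library(
    "language",
    "v1",
    config_path="/google/cloud/language/artman_language_v1.yaml",
    include_samples=True,  # new: also emit generated code samples into samples/
)

# Copy the generated library code, including the new samples/, into the repo.
s.move(library)
```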

☑️ Test output

Because Kokoro isn't running these tests automatically, the output from running the generated Python sample tests locally is included here.

These tests were run with --verbosity detailed, which shows the full sample command-line invocations and STDOUT.
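To reproduce this locally, the noxfile.py addition is roughly the following sketch (the session name and the directory argument are assumptions for illustration, not the exact code from this PR):

```python
# Hypothetical noxfile.py session for the generated sample tests; the session
# name and the "samples" directory argument are assumptions, not the PR's code.
import nox


@nox.session(python="3.6")
def sample_tests(session):
    """Run the generated code sample tests via sample-tester."""
    session.install("sample-tester")
    session.install("-e", ".")  # install the library under test
    # sample-tester locates and loads the test configs in the given directory;
    # --verbosity detailed prints each sample invocation and its STDOUT.
    session.run("sample-tester", "samples", "--verbosity", "detailed")
```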

📖 Natural Language
RUNNING: Test environment: ""
  RUNNING: Test suite: "Analyzing Syntax [code sample tests]"
    PASSED: Test case: "language_syntax_text - Analyzing the syntax of a text string (default value)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_syntax_text.py 
      | Token text: This
      | Location of this token in overall document: 0
      | Part of Speech tag: DET
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: This
      | Head token index: 1
      | Label: NSUBJ
      | Token text: is
      | Location of this token in overall document: 5
      | Part of Speech tag: VERB
      | Voice: VOICE_UNKNOWN
      | Tense: PRESENT
      | Lemma: be
      | Head token index: 1
      | Label: ROOT
      | Token text: a
      | Location of this token in overall document: 8
      | Part of Speech tag: DET
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: a
      | Head token index: 4
      | Label: DET
      | Token text: short
      | Location of this token in overall document: 10
      | Part of Speech tag: ADJ
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: short
      | Head token index: 4
      | Label: AMOD
      | Token text: sentence
      | Location of this token in overall document: 16
      | Part of Speech tag: NOUN
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: sentence
      | Head token index: 1
      | Label: ATTR
      | Token text: .
      | Location of this token in overall document: 24
      | Part of Speech tag: PUNCT
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: .
      | Head token index: 1
      | Label: P
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_syntax_text - Analyzing the syntax of a text string (*custom value*)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_syntax_text.py --text_content="Alice runs. Bob ran."
      | Token text: Alice
      | Location of this token in overall document: 0
      | Part of Speech tag: NOUN
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: Alice
      | Head token index: 1
      | Label: NSUBJ
      | Token text: runs
      | Location of this token in overall document: 6
      | Part of Speech tag: VERB
      | Voice: VOICE_UNKNOWN
      | Tense: PRESENT
      | Lemma: run
      | Head token index: 1
      | Label: ROOT
      | Token text: .
      | Location of this token in overall document: 10
      | Part of Speech tag: PUNCT
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: .
      | Head token index: 1
      | Label: P
      | Token text: Bob
      | Location of this token in overall document: 12
      | Part of Speech tag: NOUN
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: Bob
      | Head token index: 4
      | Label: NSUBJ
      | Token text: ran
      | Location of this token in overall document: 16
      | Part of Speech tag: VERB
      | Voice: VOICE_UNKNOWN
      | Tense: PAST
      | Lemma: run
      | Head token index: 4
      | Label: ROOT
      | Token text: .
      | Location of this token in overall document: 19
      | Part of Speech tag: PUNCT
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: .
      | Head token index: 4
      | Label: P
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_syntax_gcs - Analyzing the syntax of text file in GCS (default value)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_syntax_gcs.py 
      | Token text: This
      | Location of this token in overall document: 0
      | Part of Speech tag: DET
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: This
      | Head token index: 1
      | Label: NSUBJ
      | Token text: is
      | Location of this token in overall document: 5
      | Part of Speech tag: VERB
      | Voice: VOICE_UNKNOWN
      | Tense: PRESENT
      | Lemma: be
      | Head token index: 1
      | Label: ROOT
      | Token text: a
      | Location of this token in overall document: 8
      | Part of Speech tag: DET
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: a
      | Head token index: 4
      | Label: DET
      | Token text: short
      | Location of this token in overall document: 10
      | Part of Speech tag: ADJ
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: short
      | Head token index: 4
      | Label: AMOD
      | Token text: sentence
      | Location of this token in overall document: 16
      | Part of Speech tag: NOUN
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: sentence
      | Head token index: 1
      | Label: ATTR
      | Token text: .
      | Location of this token in overall document: 24
      | Part of Speech tag: PUNCT
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: .
      | Head token index: 1
      | Label: P
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_syntax_gcs - Analyzing the syntax of text file in GCS (*custom value*)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_syntax_gcs.py --gcs_content_uri="gs://cloud-samples-data/language/hello.txt"
      | Token text: Hello
      | Location of this token in overall document: 0
      | Part of Speech tag: X
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: Hello
      | Head token index: 2
      | Label: DISCOURSE
      | Token text: ,
      | Location of this token in overall document: 5
      | Part of Speech tag: PUNCT
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: ,
      | Head token index: 2
      | Label: P
      | Token text: world
      | Location of this token in overall document: 7
      | Part of Speech tag: NOUN
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: world
      | Head token index: 2
      | Label: ROOT
      | Token text: !
      | Location of this token in overall document: 12
      | Part of Speech tag: PUNCT
      | Voice: VOICE_UNKNOWN
      | Tense: TENSE_UNKNOWN
      | Lemma: !
      | Head token index: 2
      | Label: P
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Analyzing Entity Sentiment [code sample tests]"
    PASSED: Test case: "language_entity_sentiment_text - Analyzing Entity Sentiment of a text string (default value)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_entity_sentiment_text.py 
      | Representative name for the entity: Grapes
      | Entity type: OTHER
      | Salience score: 0.8335162997245789
      | Entity sentiment score: 0.8999999761581421
      | Entity sentiment magnitude: 0.8999999761581421
      | Mention text: Grapes
      | Mention type: COMMON
      | Representative name for the entity: Bananas
      | Entity type: OTHER
      | Salience score: 0.16648370027542114
      | Entity sentiment score: -0.8999999761581421
      | Entity sentiment magnitude: 0.8999999761581421
      | Mention text: Bananas
      | Mention type: COMMON
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_entity_sentiment_text - Analyzing Entity Sentiment of a text string (*custom value*)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_entity_sentiment_text.py --text_content="Grapes are actually not very good. But Bananas are great."
      | Representative name for the entity: Grapes
      | Entity type: OTHER
      | Salience score: 0.9395261406898499
      | Entity sentiment score: -0.800000011920929
      | Entity sentiment magnitude: 0.800000011920929
      | Mention text: Grapes
      | Mention type: COMMON
      | Representative name for the entity: Bananas
      | Entity type: OTHER
      | Salience score: 0.06047387048602104
      | Entity sentiment score: 0.8999999761581421
      | Entity sentiment magnitude: 0.8999999761581421
      | Mention text: Bananas
      | Mention type: COMMON
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_entity_sentiment_gcs - Analyzing Entity Sentiment of text file in GCS (default value)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_entity_sentiment_gcs.py 
      | Representative name for the entity: Grapes
      | Entity type: OTHER
      | Salience score: 0.8335162997245789
      | Entity sentiment score: 0.8999999761581421
      | Entity sentiment magnitude: 0.8999999761581421
      | Mention text: Grapes
      | Mention type: COMMON
      | Representative name for the entity: Bananas
      | Entity type: OTHER
      | Salience score: 0.16648370027542114
      | Entity sentiment score: -0.8999999761581421
      | Entity sentiment magnitude: 0.8999999761581421
      | Mention text: Bananas
      | Mention type: COMMON
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_entity_sentiment_gcs - Analyzing Entity Sentiment of text file in GCS (*custom value*)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_entity_sentiment_gcs.py --gcs_content_uri="gs://cloud-samples-data/language/entity-sentiment-reverse.txt"
      | Representative name for the entity: Grapes
      | Entity type: OTHER
      | Salience score: 0.9395261406898499
      | Entity sentiment score: -0.800000011920929
      | Entity sentiment magnitude: 0.800000011920929
      | Mention text: Grapes
      | Mention type: COMMON
      | Representative name for the entity: Bananas
      | Entity type: OTHER
      | Salience score: 0.06047387048602104
      | Entity sentiment score: 0.8999999761581421
      | Entity sentiment magnitude: 0.8999999761581421
      | Mention text: Bananas
      | Mention type: COMMON
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Classifying Content [code sample tests]"
    PASSED: Test case: "language_classify_text - Classifying Content of a text string (default value)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_classify_text.py 
      | Category name: /Arts & Entertainment/TV & Video/TV Shows & Programs
      | Confidence: 0.5199999809265137
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_classify_text - Classifying Content of a text string (*custom value*)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_classify_text.py --text_content="Let's drink coffee and eat bagels at a coffee shop. I want muffins, croisants, coffee and baked goods."
      | Category name: /Food & Drink/Beverages/Coffee & Tea
      | Confidence: 0.8199999928474426
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_classify_gcs - Classifying Content of text file in GCS (default value)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_classify_gcs.py 
      | Category name: /Arts & Entertainment/Movies
      | Confidence: 0.9200000166893005
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_classify_gcs - Classifying Content of text file in GCS (*custom value*)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_classify_gcs.py --gcs_content_uri="gs://cloud-samples-data/language/android.txt"
      | Category name: /Computers & Electronics
      | Confidence: 0.800000011920929
      | Category name: /Internet & Telecom/Mobile & Wireless/Mobile Apps & Add-Ons
      | Confidence: 0.6499999761581421
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Analyzing Sentiment [code sample tests]"
    PASSED: Test case: "language_sentiment_text - Analyzing the sentiment of a text string (default value)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_sentiment_text.py 
      | Document sentiment score: 0.8999999761581421
      | Document sentiment magnitude: 0.8999999761581421
      | Sentence text: I am so happy and joyful.
      | Sentence sentiment score: 0.8999999761581421
      | Sentence sentiment magnitude: 0.8999999761581421
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_sentiment_text - Analyzing the sentiment of a text string (*custom value*)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_sentiment_text.py --text_content="I am very happy. I am angry and sad."
      | Document sentiment score: 0.10000000149011612
      | Document sentiment magnitude: 1.2999999523162842
      | Sentence text: I am very happy.
      | Sentence sentiment score: 0.800000011920929
      | Sentence sentiment magnitude: 0.800000011920929
      | Sentence text: I am angry and sad.
      | Sentence sentiment score: -0.4000000059604645
      | Sentence sentiment magnitude: 0.4000000059604645
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_sentiment_gcs - Analyzing the sentiment of text file in GCS (default value)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_sentiment_gcs.py 
      | Document sentiment score: 0.8999999761581421
      | Document sentiment magnitude: 0.8999999761581421
      | Sentence text: I am so happy and joyful.
      | Sentence sentiment score: 0.8999999761581421
      | Sentence sentiment magnitude: 0.8999999761581421
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_sentiment_gcs - Analyzing the sentiment of text file in GCS (*custom value*)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_sentiment_gcs.py --gcs_content_uri="gs://cloud-samples-data/language/sentiment-negative.txt"
      | Document sentiment score: -0.6000000238418579
      | Document sentiment magnitude: 0.6000000238418579
      | Sentence text: I am so sad and upset.
      | Sentence sentiment score: -0.6000000238418579
      | Sentence sentiment magnitude: 0.6000000238418579
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Analyzing Entities [code sample tests]"
    PASSED: Test case: "language_entities_text - Analyzing the Entities of a text string (default value)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_entities_text.py 
      | Representative name for the entity: California
      | Entity type: LOCATION
      | Salience score: 1.0
      | wikipedia_url: https://en.wikipedia.org/wiki/California
      | mid: /m/01n7q
      | Mention text: California
      | Mention type: PROPER
      | Mention text: state
      | Mention type: COMMON
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_entities_text - Analyzing the Entities of a text string (*custom value*)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_entities_text.py --text_content="Alice is a person. She lives in California."
      | Representative name for the entity: Alice
      | Entity type: PERSON
      | Salience score: 0.9694082140922546
      | Mention text: Alice
      | Mention type: PROPER
      | Mention text: person
      | Mention type: COMMON
      | Representative name for the entity: California
      | Entity type: LOCATION
      | Salience score: 0.030591759830713272
      | wikipedia_url: https://en.wikipedia.org/wiki/California
      | mid: /m/01n7q
      | Mention text: California
      | Mention type: PROPER
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_entities_text - Analyzing the Entities of a text string (*metadata attributes*)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_entities_text.py --text_content="I called 202-762-1401 on January 31, 2019 from 1600 Amphitheatre Parkway, Mountain View, CA."
      | Representative name for the entity: Mountain View
      | Entity type: LOCATION
      | Salience score: 0.46325093507766724
      | wikipedia_url: https://en.wikipedia.org/wiki/Mountain_View,_California
      | mid: /m/0r6c4
      | Mention text: Mountain View
      | Mention type: PROPER
      | Representative name for the entity: CA
      | Entity type: LOCATION
      | Salience score: 0.3285367786884308
      | wikipedia_url: https://en.wikipedia.org/wiki/California
      | mid: /m/01n7q
      | Mention text: CA
      | Mention type: PROPER
      | Representative name for the entity: Amphitheatre Parkway
      | Entity type: LOCATION
      | Salience score: 0.20821228623390198
      | mid: /g/1tf2sgcm
      | Mention text: Amphitheatre Parkway
      | Mention type: PROPER
      | Representative name for the entity: 202-762-1401
      | Entity type: PHONE_NUMBER
      | Salience score: 0.0
      | national_prefix: 1
      | area_code: 202
      | number: 7621401
      | Mention text: 202-762-1401
      | Mention type: TYPE_UNKNOWN
      | Representative name for the entity: January 31, 2019
      | Entity type: DATE
      | Salience score: 0.0
      | year: 2019
      | month: 1
      | day: 31
      | Mention text: January 31, 2019
      | Mention type: TYPE_UNKNOWN
      | Representative name for the entity: 1600 Amphitheatre Parkway, Mountain View, CA
      | Entity type: ADDRESS
      | Salience score: 0.0
      | street_name: Amphitheatre Parkway
      | broad_region: California
      | country: US
      | narrow_region: Santa Clara County
      | locality: Mountain View
      | street_number: 1600
      | Mention text: 1600 Amphitheatre Parkway, Mountain View, CA
      | Mention type: TYPE_UNKNOWN
      | Representative name for the entity: 1401
      | Entity type: NUMBER
      | Salience score: 0.0
      | value: 1401
      | Mention text: 1401
      | Mention type: TYPE_UNKNOWN
      | Representative name for the entity: 31
      | Entity type: NUMBER
      | Salience score: 0.0
      | value: 31
      | Mention text: 31
      | Mention type: TYPE_UNKNOWN
      | Representative name for the entity: 2019
      | Entity type: NUMBER
      | Salience score: 0.0
      | value: 2019
      | Mention text: 2019
      | Mention type: TYPE_UNKNOWN
      | Representative name for the entity: 1600
      | Entity type: NUMBER
      | Salience score: 0.0
      | value: 1600
      | Mention text: 1600
      | Mention type: TYPE_UNKNOWN
      | Representative name for the entity: 762
      | Entity type: NUMBER
      | Salience score: 0.0
      | value: 762
      | Mention text: 762
      | Mention type: TYPE_UNKNOWN
      | Representative name for the entity: 202
      | Entity type: NUMBER
      | Salience score: 0.0
      | value: 202
      | Mention text: 202
      | Mention type: TYPE_UNKNOWN
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_entities_gcs - Analyzing the Entities of text file in GCS (default value)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_entities_gcs.py 
      | Representative name for the entity: California
      | Entity type: LOCATION
      | Salience score: 1.0
      | mid: /m/01n7q
      | wikipedia_url: https://en.wikipedia.org/wiki/California
      | Mention text: California
      | Mention type: PROPER
      | Mention text: state
      | Mention type: COMMON
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "language_entities_gcs - Analyzing the Entities of text file in GCS (*custom value*)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/language_entities_gcs.py --gcs_content_uri="gs://cloud-samples-data/language/entity-sentiment.txt"
      | Representative name for the entity: Grapes
      | Entity type: OTHER
      | Salience score: 0.8335162997245789
      | Mention text: Grapes
      | Mention type: COMMON
      | Representative name for the entity: Bananas
      | Entity type: OTHER
      | Salience score: 0.16648370027542114
      | Mention text: Bananas
      | Mention type: COMMON
      | Language of the text: en
      | 
      | ### Test case TEARDOWN
      | 

Tests passed
🗣 Speech-to-Text
RUNNING: Test environment: ""
  RUNNING: Test suite: "Transcript Audio File using Long Running Operation (Cloud Storage) (LRO)"
    PASSED: Test case: "speech_transcribe_async_gcs (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_async_gcs.py 
      | Waiting for operation to complete...
      | Transcript: how old is the Brooklyn Bridge
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_async_gcs (--storage_uri)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_async_gcs.py --storage_uri="gs://cloud-samples-data/speech/hello.raw"
      | Waiting for operation to complete...
      | Transcript: hello
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Adding recognition metadata (Local File) (Beta)"
    PASSED: Test case: "speech_transcribe_recognition_metadata_beta (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1p1beta1/speech_transcribe_recognition_metadata_beta.py 
      | Transcript: I'm here
      | Transcript:  hi I'd like to buy a Chrome Cast and I was wondering whether you could help me with that
      | Transcript:  Hulu which color would you like we have blue black and breath
      | Transcript:  let's get the black one
      | Transcript:  okay Chris would you like the New Concord Ultra model or the regular Comcast
      | Transcript:  regular Chrome Cast design
      | Transcript:  okay sure would you like to ship it regular or Express
      | Transcript:  Express please
      | Transcript:  terrific it's on the way thank you very much thank you
      | Transcript:  bye
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_recognition_metadata_beta (--local_file_path)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1p1beta1/speech_transcribe_recognition_metadata_beta.py --local_file_path="resources/brooklyn_bridge.flac"
      | Transcript: how old is the Brooklyn Bridge
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Speech-to-Text Sample Tests For Speech Adaptation"
    PASSED: Test case: "speech_adaptation_beta"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1p1beta1/speech_adaptation_beta.py 
      | Transcript: how old is the Brooklyn Bridge
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Getting punctuation in results (Local File) (Beta)"
    PASSED: Test case: "speech_transcribe_auto_punctuation_beta (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1p1beta1/speech_transcribe_auto_punctuation_beta.py 
      | Transcript: I'm here.
      | Transcript:  Hi, I'd like to buy a Chrome Cast and I was wondering whether you could help me with that.
      | Transcript:  Hulu which color would you like? We have blue black and breath
      | Transcript:  Let's get the black one.
      | Transcript:  Okay, Chris, would you like the New Concord Ultra model or the regular Comcast?
      | Transcript:  regular Chrome Cast design
      | Transcript:  Okay. Sure. Would you like to ship it regular or Express?
      | Transcript:  Express, please.
      | Transcript:  Terrific. It's on the way. Thank you very much. Thank you.
      | Transcript:  Bye.
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_auto_punctuation_beta (--local_file_path)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1p1beta1/speech_transcribe_auto_punctuation_beta.py --local_file_path="resources/brooklyn_bridge.flac"
      | Transcript: How old is the Brooklyn Bridge?
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Using Enhanced Models (Local File)"
    PASSED: Test case: "speech_transcribe_enhanced_model (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_enhanced_model.py 
      | Transcript: hello
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_enhanced_model (--local_file_path)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_enhanced_model.py --local_file_path="resources/commercial_mono.wav"
      | Transcript: okay I'm here
      | Transcript:  hi I'd like to buy a Chromecast and I was wondering whether you could help me with that
      | Transcript:  certainly which color would you like we have blue black and red
      | Transcript:  let's get the black one
      | Transcript:  okay great would you like the new Chromecast Ultra model or the regular Chromecast
      | Transcript:  regular Chromecast is fine
      | Transcript:  okay sure would you like to ship it regular or Express
      | Transcript:  Express please
      | Transcript:  terrific it's on the way thank you very much thank you
      | Transcript:  bye
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Multi-Channel Audio Transcription (Cloud Storage)"
    PASSED: Test case: "speech_transcribe_multichannel_gcs (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_multichannel_gcs.py 
      | Channel tag: 2
      | Transcript: how are you doing still being too
      | Channel tag: 1
      | Transcript: how are you doing estoy bien e tu
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_multichannel_gcs (--storage_uri)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_multichannel_gcs.py --storage_uri="gs://cloud-samples-data/speech/brooklyn_bridge.wav"
      | Channel tag: 1
      | Transcript: how old is the Brooklyn Bridge
      | Channel tag: 2
      | Transcript: how old is the Brooklyn Bridge
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Selecting a Transcription Model (Local File)"
    PASSED: Test case: "speech_transcribe_model_selection (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_model_selection.py 
      | Transcript: Hello.
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_model_selection (--local_file_path)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_model_selection.py --local_file_path="resources/commercial_mono.wav"
      | Transcript: I'm here.
      | Transcript:  Hi, I'd like to buy a Chrome Cast and I was wondering whether you could help me with that.
      | Transcript:  Hulu which color would you like? We have blue black and breath
      | Transcript:  Let's get the black one.
      | Transcript:  Okay, Chris, would you like the New Concord Ultra model or the regular Comcast?
      | Transcript:  regular Chrome Cast design
      | Transcript:  Okay. Sure. Would you like to ship it regular or Express?
      | Transcript:  Express, please.
      | Transcript:  Terrific. It's on the way. Thank you very much. Thank you.
      | Transcript:  Bye.
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_model_selection (--model)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_model_selection.py --model="video"
      | Transcript: hello
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_model_selection (invalid --model)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_model_selection.py --model="I_DONT_EXIST"
      | # ... call did not succeed  Traceback (most recent call last):
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
      |     return callable_(*args, **kwargs)
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/grpc/_channel.py", line 562, in __call__
      |     return _end_unary_response_blocking(state, call, False, None)
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/grpc/_channel.py", line 466, in _end_unary_response_blocking
      |     raise _Rendezvous(state, None, None, deadline)
      | grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
      | 	status = StatusCode.INVALID_ARGUMENT
      | 	details = "Invalid recognition 'config': Incorrect model specified. Please refer to the documentation page for valid model names."
      | 	debug_error_string = "{"created":"@1567126803.837692000","description":"Error received from peer ipv4:172.217.14.202:443","file":"src/core/lib/surface/call.cc","file_line":1041,"grpc_message":"Invalid recognition 'config': Incorrect model specified. Please refer to the documentation page for valid model names.","grpc_status":3}"
      | >
      | 
      | The above exception was the direct cause of the following exception:
      | 
      | Traceback (most recent call last):
      |   File "./v1/speech_transcribe_model_selection.py", line 77, in <module>
      |     main()
      |   File "./v1/speech_transcribe_model_selection.py", line 73, in main
      |     sample_recognize(args.local_file_path, args.model)
      |   File "./v1/speech_transcribe_model_selection.py", line 55, in sample_recognize
      |     response = client.recognize(config, audio)
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/cloud/speech_v1/gapic/speech_client.py", line 241, in recognize
      |     request, retry=retry, timeout=timeout, metadata=metadata
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
      |     return wrapped_func(*args, **kwargs)
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/api_core/retry.py", line 273, in retry_wrapped_func
      |     on_error=on_error,
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/api_core/retry.py", line 182, in retry_target
      |     return target()
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
      |     return func(*args, **kwargs)
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
      |     six.raise_from(exceptions.from_grpc_error(exc), exc)
      |   File "<string>", line 3, in raise_from
      | google.api_core.exceptions.InvalidArgument: 400 Invalid recognition 'config': Incorrect model specified. Please refer to the documentation page for valid model names.
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Getting word timestamps (Cloud Storage) (LRO)"
    PASSED: Test case: "speech_transcribe_async_word_time_offsets_gcs (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_async_word_time_offsets_gcs.py 
      | Waiting for operation to complete...
      | Transcript: how old is the Brooklyn Bridge
      | Word: how
      | Start time: 0 seconds 0 nanos
      | End time: 0 seconds 300000000 nanos
      | Word: old
      | Start time: 0 seconds 300000000 nanos
      | End time: 0 seconds 600000000 nanos
      | Word: is
      | Start time: 0 seconds 600000000 nanos
      | End time: 0 seconds 800000000 nanos
      | Word: the
      | Start time: 0 seconds 800000000 nanos
      | End time: 0 seconds 900000000 nanos
      | Word: Brooklyn
      | Start time: 0 seconds 900000000 nanos
      | End time: 1 seconds 100000000 nanos
      | Word: Bridge
      | Start time: 1 seconds 100000000 nanos
      | End time: 1 seconds 400000000 nanos
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_async_word_time_offsets_gcs (--storage_uri)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_async_word_time_offsets_gcs.py --storage_uri="gs://cloud-samples-data/speech/multi.flac"
      | Waiting for operation to complete...
      | Transcript: how are you doing still being too
      | Word: how
      | Start time: 0 seconds 600000000 nanos
      | End time: 1 seconds 400000000 nanos
      | Word: are
      | Start time: 1 seconds 400000000 nanos
      | End time: 1 seconds 600000000 nanos
      | Word: you
      | Start time: 1 seconds 600000000 nanos
      | End time: 1 seconds 700000000 nanos
      | Word: doing
      | Start time: 1 seconds 700000000 nanos
      | End time: 1 seconds 800000000 nanos
      | Word: still
      | Start time: 1 seconds 800000000 nanos
      | End time: 3 seconds 100000000 nanos
      | Word: being
      | Start time: 3 seconds 100000000 nanos
      | End time: 3 seconds 300000000 nanos
      | Word: too
      | Start time: 3 seconds 300000000 nanos
      | End time: 3 seconds 900000000 nanos
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Multi-Channel Audio Transcription (Local File)"
    PASSED: Test case: "speech_transcribe_multichannel (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_multichannel.py 
      | Channel tag: 2
      | Transcript: how are you doing still being too
      | Channel tag: 1
      | Transcript: how are you doing estoy bien e tu
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_multichannel (--local_file_path)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_multichannel.py --local_file_path="resources/brooklyn_bridge.wav"
      | Channel tag: 1
      | Transcript: how old is the Brooklyn Bridge
      | Channel tag: 2
      | Transcript: how old is the Brooklyn Bridge
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Transcript Audio File (Cloud Storage)"
    PASSED: Test case: "speech_transcribe_sync_gcs (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_sync_gcs.py 
      | Transcript: how old is the Brooklyn Bridge
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_sync_gcs (--storage_uri)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_sync_gcs.py --storage_uri="gs://cloud-samples-data/speech/hello.raw"
      | Transcript: hello
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Separating different speakers (Local File) (LRO) (Beta)"
    PASSED: Test case: "speech_transcribe_diarization_beta (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1p1beta1/speech_transcribe_diarization_beta.py 
      | Waiting for operation to complete...
      | Transcript: I'm here
      | Word: I'm
      | Speaker tag: 0
      | Word: here
      | Speaker tag: 0
      | Transcript:  hi I'd like to buy a Chrome Cast and I was wondering whether you could help me with that
      | Word: hi
      | Speaker tag: 0
      | Word: I'd
      | Speaker tag: 0
      | Word: like
      | Speaker tag: 0
      | Word: to
      | Speaker tag: 0
      | Word: buy
      | Speaker tag: 0
      | Word: a
      | Speaker tag: 0
      | Word: Chrome
      | Speaker tag: 0
      | Word: Cast
      | Speaker tag: 0
      | Word: and
      | Speaker tag: 0
      | Word: I
      | Speaker tag: 0
      | Word: was
      | Speaker tag: 0
      | Word: wondering
      | Speaker tag: 0
      | Word: whether
      | Speaker tag: 0
      | Word: you
      | Speaker tag: 0
      | Word: could
      | Speaker tag: 0
      | Word: help
      | Speaker tag: 0
      | Word: me
      | Speaker tag: 0
      | Word: with
      | Speaker tag: 0
      | Word: that
      | Speaker tag: 0
      | Transcript:  Hulu which color would you like we have blue black and breath
      | Word: Hulu
      | Speaker tag: 0
      | Word: which
      | Speaker tag: 0
      | Word: color
      | Speaker tag: 0
      | Word: would
      | Speaker tag: 0
      | Word: you
      | Speaker tag: 0
      | Word: like
      | Speaker tag: 0
      | Word: we
      | Speaker tag: 0
      | Word: have
      | Speaker tag: 0
      | Word: blue
      | Speaker tag: 0
      | Word: black
      | Speaker tag: 0
      | Word: and
      | Speaker tag: 0
      | Word: breath
      | Speaker tag: 0
      | Transcript:  let's get the black one
      | Word: let's
      | Speaker tag: 0
      | Word: get
      | Speaker tag: 0
      | Word: the
      | Speaker tag: 0
      | Word: black
      | Speaker tag: 0
      | Word: one
      | Speaker tag: 0
      | Transcript:  okay Chris would you like the New Concord Ultra model or the regular Comcast
      | Word: okay
      | Speaker tag: 0
      | Word: Chris
      | Speaker tag: 0
      | Word: would
      | Speaker tag: 0
      | Word: you
      | Speaker tag: 0
      | Word: like
      | Speaker tag: 0
      | Word: the
      | Speaker tag: 0
      | Word: New
      | Speaker tag: 0
      | Word: Concord
      | Speaker tag: 0
      | Word: Ultra
      | Speaker tag: 0
      | Word: model
      | Speaker tag: 0
      | Word: or
      | Speaker tag: 0
      | Word: the
      | Speaker tag: 0
      | Word: regular
      | Speaker tag: 0
      | Word: Comcast
      | Speaker tag: 0
      | Transcript:  regular Chrome Cast design
      | Word: regular
      | Speaker tag: 0
      | Word: Chrome
      | Speaker tag: 0
      | Word: Cast
      | Speaker tag: 0
      | Word: design
      | Speaker tag: 0
      | Transcript:  okay sure would you like to ship it regular or Express
      | Word: okay
      | Speaker tag: 0
      | Word: sure
      | Speaker tag: 0
      | Word: would
      | Speaker tag: 0
      | Word: you
      | Speaker tag: 0
      | Word: like
      | Speaker tag: 0
      | Word: to
      | Speaker tag: 0
      | Word: ship
      | Speaker tag: 0
      | Word: it
      | Speaker tag: 0
      | Word: regular
      | Speaker tag: 0
      | Word: or
      | Speaker tag: 0
      | Word: Express
      | Speaker tag: 0
      | Transcript:  Express please
      | Word: Express
      | Speaker tag: 0
      | Word: please
      | Speaker tag: 0
      | Transcript:  terrific it's on the way thank you very much thank you
      | Word: terrific
      | Speaker tag: 0
      | Word: it's
      | Speaker tag: 0
      | Word: on
      | Speaker tag: 0
      | Word: the
      | Speaker tag: 0
      | Word: way
      | Speaker tag: 0
      | Word: thank
      | Speaker tag: 0
      | Word: you
      | Speaker tag: 0
      | Word: very
      | Speaker tag: 0
      | Word: much
      | Speaker tag: 0
      | Word: thank
      | Speaker tag: 0
      | Word: you
      | Speaker tag: 0
      | Transcript:  bye
      | Word: bye
      | Speaker tag: 0
      | Transcript:  bye
      | Word: I'm
      | Speaker tag: 1
      | Word: here
      | Speaker tag: 1
      | Word: hi
      | Speaker tag: 2
      | Word: I'd
      | Speaker tag: 2
      | Word: like
      | Speaker tag: 2
      | Word: to
      | Speaker tag: 2
      | Word: buy
      | Speaker tag: 2
      | Word: a
      | Speaker tag: 2
      | Word: Chrome
      | Speaker tag: 2
      | Word: Cast
      | Speaker tag: 2
      | Word: and
      | Speaker tag: 2
      | Word: I
      | Speaker tag: 2
      | Word: was
      | Speaker tag: 2
      | Word: wondering
      | Speaker tag: 2
      | Word: whether
      | Speaker tag: 2
      | Word: you
      | Speaker tag: 2
      | Word: could
      | Speaker tag: 2
      | Word: help
      | Speaker tag: 2
      | Word: me
      | Speaker tag: 2
      | Word: with
      | Speaker tag: 1
      | Word: that
      | Speaker tag: 1
      | Word: Hulu
      | Speaker tag: 1
      | Word: which
      | Speaker tag: 1
      | Word: color
      | Speaker tag: 1
      | Word: would
      | Speaker tag: 1
      | Word: you
      | Speaker tag: 1
      | Word: like
      | Speaker tag: 1
      | Word: we
      | Speaker tag: 1
      | Word: have
      | Speaker tag: 1
      | Word: blue
      | Speaker tag: 1
      | Word: black
      | Speaker tag: 1
      | Word: and
      | Speaker tag: 1
      | Word: breath
      | Speaker tag: 2
      | Word: let's
      | Speaker tag: 2
      | Word: get
      | Speaker tag: 2
      | Word: the
      | Speaker tag: 2
      | Word: black
      | Speaker tag: 2
      | Word: one
      | Speaker tag: 1
      | Word: okay
      | Speaker tag: 1
      | Word: Chris
      | Speaker tag: 1
      | Word: would
      | Speaker tag: 1
      | Word: you
      | Speaker tag: 1
      | Word: like
      | Speaker tag: 1
      | Word: the
      | Speaker tag: 1
      | Word: New
      | Speaker tag: 1
      | Word: Concord
      | Speaker tag: 1
      | Word: Ultra
      | Speaker tag: 1
      | Word: model
      | Speaker tag: 1
      | Word: or
      | Speaker tag: 1
      | Word: the
      | Speaker tag: 1
      | Word: regular
      | Speaker tag: 2
      | Word: Comcast
      | Speaker tag: 2
      | Word: regular
      | Speaker tag: 2
      | Word: Chrome
      | Speaker tag: 2
      | Word: Cast
      | Speaker tag: 2
      | Word: design
      | Speaker tag: 1
      | Word: okay
      | Speaker tag: 1
      | Word: sure
      | Speaker tag: 1
      | Word: would
      | Speaker tag: 1
      | Word: you
      | Speaker tag: 1
      | Word: like
      | Speaker tag: 1
      | Word: to
      | Speaker tag: 1
      | Word: ship
      | Speaker tag: 1
      | Word: it
      | Speaker tag: 1
      | Word: regular
      | Speaker tag: 1
      | Word: or
      | Speaker tag: 1
      | Word: Express
      | Speaker tag: 2
      | Word: Express
      | Speaker tag: 2
      | Word: please
      | Speaker tag: 2
      | Word: terrific
      | Speaker tag: 2
      | Word: it's
      | Speaker tag: 2
      | Word: on
      | Speaker tag: 1
      | Word: the
      | Speaker tag: 1
      | Word: way
      | Speaker tag: 1
      | Word: thank
      | Speaker tag: 1
      | Word: you
      | Speaker tag: 1
      | Word: very
      | Speaker tag: 1
      | Word: much
      | Speaker tag: 1
      | Word: thank
      | Speaker tag: 2
      | Word: you
      | Speaker tag: 2
      | Word: bye
      | Speaker tag: 2
      | Word: bye
      | Speaker tag: 2
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_diarization_beta (--local_file_path)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1p1beta1/speech_transcribe_diarization_beta.py --local_file_path="resources/multi.flac"
      | Waiting for operation to complete...
      | Transcript: how are you doing still being too
      | Word: how
      | Speaker tag: 1
      | Word: are
      | Speaker tag: 1
      | Word: you
      | Speaker tag: 1
      | Word: doing
      | Speaker tag: 1
      | Word: still
      | Speaker tag: 1
      | Word: being
      | Speaker tag: 1
      | Word: too
      | Speaker tag: 1
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Selecting a Transcription Model (Cloud Storage)"
    PASSED: Test case: "speech_transcribe_model_selection_gcs (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_model_selection_gcs.py 
      | Transcript: Hello.
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_model_selection_gcs (--local_file_path)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_model_selection_gcs.py --storage_uri="gs://cloud-samples-data/speech/commercial_mono.wav"
      | Transcript: I'm here.
      | Transcript:  Hi, I'd like to buy a Chrome Cast and I was wondering whether you could help me with that.
      | Transcript:  Hulu which color would you like? We have blue black and breath
      | Transcript:  Let's get the black one.
      | Transcript:  Okay, Chris, would you like the New Concord Ultra model or the regular Comcast?
      | Transcript:  regular Chrome Cast design
      | Transcript:  Okay. Sure. Would you like to ship it regular or Express?
      | Transcript:  Express, please.
      | Transcript:  Terrific. It's on the way. Thank you very much. Thank you.
      | Transcript:  Bye.
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_model_selection_gcs (--model)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_model_selection_gcs.py --model="video"
      | Transcript: hello
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_model_selection_gcs (invalid --model)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_model_selection_gcs.py --model="I_DONT_EXIST"
      | # ... call did not succeed  Traceback (most recent call last):
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
      |     return callable_(*args, **kwargs)
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/grpc/_channel.py", line 562, in __call__
      |     return _end_unary_response_blocking(state, call, False, None)
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/grpc/_channel.py", line 466, in _end_unary_response_blocking
      |     raise _Rendezvous(state, None, None, deadline)
      | grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
      | 	status = StatusCode.INVALID_ARGUMENT
      | 	details = "Invalid recognition 'config': Incorrect model specified. Please refer to the documentation page for valid model names."
      | 	debug_error_string = "{"created":"@1567126862.239434000","description":"Error received from peer ipv4:172.217.14.202:443","file":"src/core/lib/surface/call.cc","file_line":1041,"grpc_message":"Invalid recognition 'config': Incorrect model specified. Please refer to the documentation page for valid model names.","grpc_status":3}"
      | >
      | 
      | The above exception was the direct cause of the following exception:
      | 
      | Traceback (most recent call last):
      |   File "./v1/speech_transcribe_model_selection_gcs.py", line 78, in <module>
      |     main()
      |   File "./v1/speech_transcribe_model_selection_gcs.py", line 74, in main
      |     sample_recognize(args.storage_uri, args.model)
      |   File "./v1/speech_transcribe_model_selection_gcs.py", line 54, in sample_recognize
      |     response = client.recognize(config, audio)
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/cloud/speech_v1/gapic/speech_client.py", line 241, in recognize
      |     request, retry=retry, timeout=timeout, metadata=metadata
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
      |     return wrapped_func(*args, **kwargs)
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/api_core/retry.py", line 273, in retry_wrapped_func
      |     on_error=on_error,
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/api_core/retry.py", line 182, in retry_target
      |     return target()
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
      |     return func(*args, **kwargs)
      |   File "/Users/rebeccataylor/.pyenv/versions/3.6.8/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
      |     six.raise_from(exceptions.from_grpc_error(exc), exc)
      |   File "<string>", line 3, in raise_from
      | google.api_core.exceptions.InvalidArgument: 400 Invalid recognition 'config': Incorrect model specified. Please refer to the documentation page for valid model names.
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Transcribe Audio File (Local File)"
    PASSED: Test case: "speech_transcribe_sync (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_sync.py 
      | Transcript: how old is the Brooklyn Bridge
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_sync (--local_file_path)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_sync.py --local_file_path="resources/hello.raw"
      | Transcript: hello
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Speech-to-Text Sample Tests For Quickstart"
    PASSED: Test case: "speech_quickstart_beta"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1p1beta1/speech_quickstart_beta.py 
      | Transcript: how old is the Brooklyn Bridge
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Enabling word-level confidence (Local File) (Beta)"
    PASSED: Test case: "speech_transcribe_word_level_confidence_beta (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1p1beta1/speech_transcribe_word_level_confidence_beta.py 
      | Transcript: how old is the Brooklyn Bridge
      | Word: how
      | Confidence: 0.9876290559768677
      | Word: old
      | Confidence: 0.9703859090805054
      | Word: is
      | Confidence: 0.9821367859840393
      | Word: the
      | Confidence: 0.9821367859840393
      | Word: Brooklyn
      | Confidence: 0.9876290559768677
      | Word: Bridge
      | Confidence: 0.9876290559768677
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_word_level_confidence_beta (--local_file_path)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1p1beta1/speech_transcribe_word_level_confidence_beta.py --local_file_path="resources/multi.flac"
      | Transcript: how are you doing still being too
      | Word: how
      | Confidence: 0.9876290559768677
      | Word: are
      | Confidence: 0.9876290559768677
      | Word: you
      | Confidence: 0.9876290559768677
      | Word: doing
      | Confidence: 0.9876290559768677
      | Word: still
      | Confidence: 0.7945960164070129
      | Word: being
      | Confidence: 0.8104138970375061
      | Word: too
      | Confidence: 0.8104138970375061
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Detecting language spoken automatically (Local File) (Beta)"
    PASSED: Test case: "speech_transcribe_multilanguage_beta (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1p1beta1/speech_transcribe_multilanguage_beta.py 
      | Detected language: en-us
      | Transcript: how old is the Brooklyn Bridge
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_multilanguage_beta (--local_file_path)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1p1beta1/speech_transcribe_multilanguage_beta.py --local_file_path="resources/multi.flac"
      | Detected language: en-us
      | Transcript: how are you doing still being too
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Transcribe Audio File using Long Running Operation (Local File) (LRO)"
    PASSED: Test case: "speech_transcribe_async (no arguments)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_async.py 
      | Waiting for operation to complete...
      | Transcript: how old is the Brooklyn Bridge
      | 
      | ### Test case TEARDOWN
      | 
    PASSED: Test case: "speech_transcribe_async (--local_file_path)"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/speech_transcribe_async.py --local_file_path="resources/hello.raw"
      | Waiting for operation to complete...
      | Transcript: hello
      | 
      | ### Test case TEARDOWN
      | 
  RUNNING: Test suite: "Speech-to-Text Sample Tests For Speech Contexts Static Classes"
    PASSED: Test case: "speech_contexts_classes_beta"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1p1beta1/speech_contexts_classes_beta.py 
      | Transcript: the time is 5:45 p.m.
      | 
      | ### Test case TEARDOWN
      | 

Tests passed

@busunkim96 et al ~ ready for re-review! 🔍

I'll happily update this PR to include changes to the Kokoro files.
I don't know how the Kokoro setup works, though :)
Should I file a separate feature-request issue for editing the Kokoro config?
I'm looking for your guidance here 🤓
I've set "Allow edits from maintainers" on this Pull Request if useful :)

✨ Using a brand new Pull Request to keep things clean. Original Pull Request: #8210

@googleapis/samplegen


📌 Notes

🐞 Tracking issue for cross-language samplegen support

googleapis/gapic-generator#2954

🤖 Sample generation updates

Highlights since the original Pull Request:

  • Sample configs are now defined in their own .yaml files (no longer part of _gapic.yaml)
  • Sample config format changed, now an officially supported v1.2.0 versioned schema
  • Sample generation work in micro generators is well underway
  • SynthTool has an include_samples=True option for generating with samplegen
  • Sample Tester – various usability improvements
    • Much easier to run, now simply run $ sample-tester or $ sample-tester [dirs or files..]
    • Fighting fragility / battling brittleness
      • To be part of a client library's test suite, these can't be brittle!
      • By default, all assertions are case insensitive
      • assert_contains_any allows providing multiple values; the test passes if any one of those values is present. This is particularly useful for testing ML responses, where responses may change over time but DPEs can provide a list of sane values such that, if none of them are present, there's a very high likelihood that the API or sample is not working (see the sketch after this list).
    • Specifically in response to the previous PHP PR feedback:
      • sample-tester test suites can now safely be executed from any directory without impacting the results (sample commands are locked to the correct directories).
      • You can also pass any number of directories, and tests are automatically located and loaded. This should greatly simplify Kokoro CI setup :)
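
As an illustration, here's a hedged sketch of what an assert_contains_any check might look like in a sample test config. The syntax mirrors the assert_contains form used later in this PR, and the alternate transcript values are made up:

test:
  suites:
  - name: "Tolerant transcription check"
    cases:
    - name: "Test speech_quickstart"
      spec:
      - call:
          sample: speech_quickstart
      - assert_contains_any:
        - literal: "how old is the Brooklyn Bridge"
        - literal: "how old is the Brooklyn bridge"
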
📑 Internal document

go/library-samples

💻 Try it yourself!

It's now easy to generate code samples demonstrating calling single API endpoints using Google Cloud client libraries.

🤖 Sample Generation Quickstart – 🐍 Python version

For those who are interested (and for future reference), the steps below walk through adding a new code sample, with tests, to an existing client library.

📂 Locations and Tools

Generated sample development follows the same pattern as generated library development:

  • Source files are stored in googleapis/
    • .proto files with annotations
    • gapic/artman/service configuration files
    • and now code sample configuration files too!
  • SynthTool is used to generate code from sources in googleapis/
    • generates library code
    • and now also code sample source code too!
1️⃣2️⃣3️⃣ Steps for generated sample authoring

Steps:

  • Choose a library (must be raw GAPIC generated library)
  • Enable sample generation (gapic.py_library(..., include_samples=True))
  • Create new sample config file (googleapis/google/[api root]/[version]/samples/*.yaml)
  • Generate Python source code for the new sample in samples/ (using SynthTool)
  • Create new sample test (googleapis/google/[api root]/[version]/samples/test/*.test.yaml)
  • Run the tests ($ sample-tester)
⌨️ Dependencies
  • Install latest SynthTool:
    python3 -m pip install git+https://github.com/googleapis/synthtool.git
  • Install sample-tester:
    python3 -m pip install sample-tester
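
To sanity-check the sample-tester install (assuming its CLI exposes the usual --help flag):

$ sample-tester --help
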
🖥 Example: add new sample with tests

Example: Add a new sample demonstrating the AnalyzeEntities method of the Natural Language API.

1. Setup googleapis directory

You will need a local copy of googleapis. Samples are configured in googleapis, alongside the API definition .proto files, service configuration YAML files, and existing library configuration files (e.g. artman configuration YAML files).

$ git clone https://github.com/googleapis/googleapis.git
$ cd googleapis/

Any googleapis/ directory can be used, e.g. a clone of the public GitHub repo or an internal copy.

2. Browse existing sample configs in googleapis

Browse to the directory in googleapis where the Natural Language files are located:

$ cd google/cloud/language/

You can find the existing samples for Natural Language in google/cloud/language/v1/samples/

$ tree v1/samples/

v1/samples/
├── language_classify_gcs.yaml
├── language_classify_text.yaml
├── language_entities_gcs.yaml
├── language_entities_text.yaml
├── ...
└── test
    ├── analyzing_entities.test.yaml
    ├── ...
    └── classifying_content.test.yaml
3. Create a new sample config in googleapis

Create a new sample configuration: v1/samples/my_entity_sample.yaml

$ cat <<EOT >> v1/samples/my_entity_sample.yaml

type: com.google.api.codegen.samplegen.v1p2.SampleConfigProto
schema_version: 1.2.0
samples:
- region_tag: my_entity_sample
  description: "Demonstrate Analyzing Entities!"
  service: google.cloud.language.v1.LanguageService
  rpc: AnalyzeEntities
  request:
  - { field: document.content, value: "California is a state." }
  - { field: document.type, value: PLAIN_TEXT } 
  response:
  - print: ["Got a response from the API!"]
  - loop:
      collection: $resp.entities
      variable: entity
      body:
      - print:
        - "Found entity: %s (Wikipedia: %s)"
        - entity.name
        - entity.metadata{"wikipedia_url"}

EOT
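
Additional request fields are just more entries in the request list. For example, a hedged sketch adding the AnalyzeEntities encoding_type field (shown purely to illustrate the syntax, not taken from this PR's configs):

request:
- { field: document.content, value: "California is a state." }
- { field: document.type, value: PLAIN_TEXT }
- { field: encoding_type, value: UTF8 }
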
4. Setup google-cloud-python directory

To generate this code sample, you need to regenerate the client library using synthtool.

Any programming language which supports sample generation may be used.

In this example, we'll use Python client libraries which are hosted in the
google-cloud-python repository.

Locally, checkout the Google Cloud Python client library for Natural Language:

$ git clone https://github.com/googleapis/google-cloud-python.git
$ cd google-cloud-python/
$ cd language/
5. Configure synth.py for sample generation

Update synth.py for the client library to generate samples by adding include_samples=True.

This Pull Request already includes these changes to synth.py for Natural Language.

Edit synth.py:

# ...
for version in versions:
    library = gapic.py_library(
        "language",
        version,
        config_path=f"/google/cloud/language/artman_language_{version}.yaml",
        artman_output_name=f"language-{version}",
        include_protos=True,
        include_samples=True
    )
# ...
6. Generate new code sample with synthtool

Run SynthTool to generate client libraries (using your googleapis/ directory as the source for generation).

First, set SYNTHTOOL_GOOGLEAPIS to the local path of your googleapis
directory. This tells synthtool where to find the sources (.proto files and
config files, including sample configs) to use when generating the client library.

$ export SYNTHTOOL_GOOGLEAPIS=/path/to/googleapis

Now you're ready to run synth to generate your new code sample!

$ python3 -m synthtool
7. View newly generated Python code sample

A new Python code sample should have been generated: samples/v1/my_entity_sample.py

# ...
from google.cloud import language_v1
from google.cloud.language_v1 import enums


def sample_analyze_entities():
    """Demonstrate Analyzing Entities!"""

    client = language_v1.LanguageServiceClient()

    content = "California is a state."
    type_ = enums.Document.Type.PLAIN_TEXT
    document = {"content": content, "type": type_}

    response = client.analyze_entities(document)
    print(u"Got a response from the API!")
    for entity in response.entities:
        print(
            u"Found entity: {} (Wikipedia: {})".format(
                entity.name, entity.metadata["wikipedia_url"]
            )
        )
# ...
8. Run the generated Python code sample

Run the code sample:

$ python samples/v1/my_entity_sample.py

Code samples run against the live API, which requires authentication.
Make sure GOOGLE_APPLICATION_CREDENTIALS is set for authentication.
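
For example, pointing at a service account key file (the path here is hypothetical):

$ export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json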

If it worked, you should see output like:

Got a response from the API!
Found entity: California (Wikipedia: https://en.wikipedia.org/wiki/California)
9. Add test configuration for new sample in googleapis

Next, browse to the directory in googleapis where the Natural Language files are located again.
This time, create a new sample test configuration file: v1/samples/test/my_entity_sample.test.yaml

$ cd $SYNTHTOOL_GOOGLEAPIS/google/cloud/language/
$ cat <<EOT >> v1/samples/test/my_entity_sample.test.yaml

type: test/samples
schema_version: 1
test:
  suites:
  - name: "My Test Group"
    cases:
    - name: "Test my_entity_sample"
      spec:
      - call:
          sample: my_entity_sample
      - assert_contains:
        - literal: "Got a response from the API!"
        - literal: "Found entity: California"
        - literal: "Wikipedia: https://en.wikipedia.org/wiki/California"

EOT
10. Run the sample test using sample-tester

Time to run the test!

First, run synthtool again to pick up the new test file.

$ cd google-cloud-python/language/
$ python3 -m synthtool

There should be a new file: samples/v1/test/my_entity_sample.test.yaml. Run it using sample-tester:

$ sample-tester samples/v1/test/my_entity_sample.test.yaml

If all went well, you should have a passing test 🎉

RUNNING: Test environment: ""
  RUNNING: Test suite: "My Test Group"
    PASSED: Test case: "Test my_entity_sample"

Tests passed

Alternatively, simply run $ sample-tester with no arguments and it will run all tests:

$ sample-tester

# or directories
$ sample-tester samples/v1/

# or test name match patterns
$ sample-tester --cases entity

If you run sample-tester with --verbosity detailed, you can see the actual Python
commands that sample-tester executes to test each code sample:

$ sample-tester -v detailed samples/v1/test/my_entity_sample.test.yaml

RUNNING: Test environment: ""
  RUNNING: Test suite: "My Test Group"
    PASSED: Test case: "Test my_entity_sample"
      | 
      | ### Test case SETUP
      | 
      | ### Test case TEST
      | 
      | # Calling: python3 ./v1/my_entity_sample.py 
      | Got a response from the API!
      | Found entity: California (Wikipedia: https://en.wikipedia.org/wiki/California)
      | 
      | ### Test case TEARDOWN
      | 

Tests passed

Rebecca Taylor added 5 commits August 29, 2019 16:14
 – Use `include_samples=True` when calling gapic.py_library()
 – Copy the entire generated `samples/` directory from genfiles
 – Use `include_samples=True` when calling gapic.py_library()
 – Copy the entire generated `samples/` directory from genfiles
@googlebot googlebot added the cla: yes This human has signed the Contributor License Agreement. label Aug 30, 2019
Contributor

@tseaver tseaver left a comment

I'm pretty reluctant to merge the samples without also adding support in language/noxfile.py / speech/noxfile.py to automate testing them. Given that sample-tester is pip-installable, the stanzas would look something like:

@nox.session(python=["2.7", "3.7"])
def samples(session):
    """Run the samples."""
    # Sanity check: Only run tests if the environment variable is set.
    if not os.environ.get("GOOGLE_APPLICATION_CREDENTIALS", ""):
        session.skip("Credentials must be set via environment variable")

    samples_path = "samples"
    if not os.path.exists(samples_path):
        session.skip("Samples not found.")

    session.install("pyyaml")
    session.install("samples-tester")
    for local_dep in LOCAL_DEPS:
        session.install("-e", local_dep)
    session.install("-e", ".")

    session.run("sample-tester", samples_path, *session.posargs)

Then, one could test the samples locally by setting the environment variable and running:

$ nox -s samples

@tseaver tseaver changed the title 🤖 Generated code samples for Natural Language and Speech-to-Text [🐍] Language, Speech: Add enerated code samples. Sep 3, 2019
@beccasaurus
Contributor Author

Thanks @tseaver!

I made the change locally and couldn't get the session to run with nox -s samples.
Is that the correct syntax / could you help?

$ nox -s samples
nox > Error while collecting sessions.
nox > Sessions not found: samples

Here are my 💻 shell commands and output (with version numbers and whatnot)

I pushed up a new commit with the change. [b7960f0e40]

@tseaver tseaver changed the title Language, Speech: Add enerated code samples. Language, Speech: Add generated code samples. Sep 4, 2019
@tseaver
Contributor

tseaver commented Sep 4, 2019

@beccasaurus I think you need to run nox in the context of speech or language. Either:

$ cd speech
$ nox -s samples

or

$ nox -f speech/noxfile.py -s samples

@tseaver tseaver added api: language Issues related to the Cloud Natural Language API API. api: speech Issues related to the Speech-to-Text API. kokoro:force-run Add this label to force Kokoro to re-run the tests. testing type: docs Improvement to the documentation for an API. labels Sep 4, 2019
- Only run these using one Python version by default (use 3.7)
- Fix package name to install for sample-tester
@yoshi-kokoro yoshi-kokoro removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 4, 2019
@beccasaurus
Contributor Author

Thanks! Obviously a nox newb here 🙌

Works perfectly, thanks! 🙏

@nox.session(python=["3.7"])
def samples(session):
    """Run the samples test suite."""

Because the samples are tested against the real API, we usually won't want the samples session to run against multiple Python versions; that could take quite some time for some of our libraries' samples. So I configured it to use 3.7. I'm presuming this is just a default and doesn't prohibit anyone from running against other versions? If there's a better way to configure the default version versus the available versions, let me know!
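
One pattern I considered (just a sketch; I'm not sure it's the convention here) is to list every supported version in the decorator and then filter at run time with nox's -p/--pythons flag:

@nox.session(python=["2.7", "3.6", "3.7"])
def samples(session):
    """Run the samples test suite."""

$ nox -s samples -p 3.7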

@beccasaurus
Contributor Author

beccasaurus commented Sep 4, 2019

Looking at the label history, I'm not sure if this needs a kokoro:force-run applied again?

The tests all passed except 1 Language lint error (since fixed)

This is ready to run the tests again!

Since my last push, I've been watching and it hasn't changed from this state in a while:
Kokoro - API Core Expected — Waiting for status to be reported

@tseaver tseaver added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 4, 2019
@tseaver
Contributor

tseaver commented Sep 4, 2019

@beccasaurus I've applied the label again. Kokoro appears to be on a work stoppage today, though. :)

@yoshi-kokoro yoshi-kokoro removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 4, 2019
@beccasaurus
Contributor Author

Green is my favorite color.

@beccasaurus
Contributor Author

w00t ~ tagging @busunkim96 now that it's green

@tseaver
Contributor

tseaver commented Sep 4, 2019

W00t, I can see:

* samples-3.7: success

in the jobs for both speech and language.

@tseaver tseaver merged commit 18e0f16 into googleapis:master Sep 4, 2019
@busunkim96
Contributor

Looks great! We can add the samples session to the noxfile template in synthtool when more APIs are ready. :)

emar-kar pushed a commit to MaxxleLLC/google-cloud-python that referenced this pull request Sep 11, 2019
emar-kar pushed a commit to MaxxleLLC/google-cloud-python that referenced this pull request Sep 18, 2019
atulep pushed a commit that referenced this pull request Apr 3, 2023
atulep pushed a commit that referenced this pull request Apr 6, 2023
atulep pushed a commit that referenced this pull request Apr 18, 2023
parthea pushed a commit that referenced this pull request Jul 6, 2023
parthea pushed a commit that referenced this pull request Oct 22, 2023