
fix streaming test #140

Merged: 3 commits from fix-streaming-test into main, Apr 13, 2023
Conversation

@radames (Contributor) commented Apr 13, 2023

Not sure if this is due to the non-determinism of text generation, but the space before " one" keeps making the test fail. So I've added the index to decide when to add the space; a minimal sketch follows the dump below. WDYT @vvmnnnkv?

Array(5) [
  0: Object {
    token: Object {id: 80, text: "one", logprob: -0.75390625, special: false}
    generated_text: null
    details: null
  }
  1: Object {
    token: Object {id: 192, text: " two", logprob: -0.01940918, special: false}
    generated_text: null
    details: null
  }
  2: Object {
    token: Object {id: 386, text: " three", logprob: -0.015197754, special: false}
    generated_text: null
    details: null
  }
  3: Object {
    token: Object {id: 662, text: " four", logprob: -0.01940918, special: false}
    generated_text: null
    details: null
  }
  4: Object {
    token: Object {id: 1, text: "</s>", logprob: -0.030883789, special: true}
    generated_text: "one two three four"
    details: null
  }
]
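
A minimal sketch of the index-based check (Vitest/Jest-style; `chunks` and `expected` are illustrative and mirror the dump above, not the repo's actual test code):

```ts
import { expect, it } from "vitest";

// Chunks mirroring the dump above (generated_text/details omitted for brevity).
const chunks = [
  { token: { id: 80, text: "one", special: false } },
  { token: { id: 192, text: " two", special: false } },
  { token: { id: 386, text: " three", special: false } },
  { token: { id: 662, text: " four", special: false } },
  { token: { id: 1, text: "</s>", special: true } },
];

it("checks streamed tokens, using the index to expect the leading space", () => {
  const expected = ["one", "two", "three", "four"];
  let index = 0;
  for (const chunk of chunks) {
    if (chunk.token.special) continue; // skip </s>
    // Only the first token comes without a leading space.
    expect(chunk.token.text).toBe((index === 0 ? "" : " ") + expected[index]);
    index++;
  }
});
```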

@radames requested a review from coyotte508 on April 13, 2023 01:39
@radames mentioned this pull request on Apr 13, 2023
@coyotte508 (Member) left a comment

It seems the generated response changed, @radames.

IMO we can ignore spaces in the token, e.g. text: expect.stringContaining(tokenText), to be more robust.
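
A sketch of what that suggestion could look like (hypothetical chunk, not the actual test code):

```ts
import { expect, it } from "vitest";

it("is robust to leading spaces in streamed token text", () => {
  // Hypothetical chunk; the server may send "two" or " two".
  const chunk = { token: { id: 192, text: " two", special: false } };

  expect(chunk).toMatchObject({
    token: expect.objectContaining({
      // Passes regardless of surrounding whitespace.
      text: expect.stringContaining("two"),
    }),
  });
});
```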

@vvmnnnkv (Collaborator) commented

It might be due to the update of text-generation-inference to v0.5.0; I noticed it has changes related to token decoding in huggingface/text-generation-inference#144.

@@ -68,7 +68,7 @@
     "method": "POST"
   },
   "response": {
-    "body": "[{\"generated_text\":\"The answer to the universe is this: it is all of history.\\n\\nIt's time to reclaim our right to see it as we see it. It's time to take back the right to think and to make our opinions heard in the process\"}]",
+    "body": "[{\"generated_text\":\"The answer to the universe is the fundamental property of space, but we're dealing only with the simplest possible set of answers, which is why we need to talk about how we come to know and understand what sets matter.\\n\\nLet's take a\"}]",
Collaborator

It's funny that tapes.json would update with random outputs each time :)

@coyotte508 merged commit 02b4bb3 into main on Apr 13, 2023
@coyotte508 deleted the fix-streaming-test branch on April 13, 2023 20:00