
Integrate GPT Whisper so users can speak and hear without typing or directly using the screen reader #336

Open · ellvix opened this issue Jan 4, 2024 · 2 comments

ellvix commented Jan 4, 2024

No description provided.

ellvix self-assigned this Jan 4, 2024

ellvix commented Jan 31, 2024

Top priority is spoken input. For output we'll just use the current ARIA for now; later (in a new ticket) we'll do proper genAI TTS.

ellvix added the enhancement and future labels and removed the high priority label on Feb 7, 2024

ellvix commented Feb 7, 2024

Moved to a future project, as the core of Whisper is Python. We'll need a server running the actual speech-to-text processing; the resulting transcript would then be sent to the LLM as a prompt.
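
A minimal sketch of that architecture, assuming FastAPI for the server and the openai-whisper package for local speech-to-text; the endpoint name, model sizes, and the LLM hand-off are illustrative assumptions, not decisions from this ticket:

```python
# Sketch of the proposed server: accept recorded audio, run Whisper
# speech-to-text locally, then forward the transcript to the LLM as a prompt.
# Requires ffmpeg on the host for Whisper's audio decoding.
import tempfile

import whisper                      # pip install openai-whisper
from fastapi import FastAPI, File, UploadFile
from openai import OpenAI          # pip install openai

app = FastAPI()
stt = whisper.load_model("base")   # small local model; size is a placeholder
llm = OpenAI()                     # reads OPENAI_API_KEY from the environment


@app.post("/transcribe")           # hypothetical endpoint name
async def transcribe(audio: UploadFile = File(...)):
    # Whisper's transcribe() takes a file path (or audio array), so spool
    # the uploaded bytes to a temporary file first.
    with tempfile.NamedTemporaryFile(suffix=".wav") as tmp:
        tmp.write(await audio.read())
        tmp.flush()
        text = stt.transcribe(tmp.name)["text"]

    # Hand the recognized speech to the LLM as the user's prompt.
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",       # placeholder model name
        messages=[{"role": "user", "content": text}],
    )
    return {"transcript": text, "response": reply.choices[0].message.content}
```

Run with e.g. `uvicorn server:app`; the browser side would record microphone input, POST it to this endpoint, and announce whatever comes back.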
