Willow users can now self-host the Willow Inference Server for lightning-fast language inference with Willow and other applications (even WebRTC), supporting STT, TTS, LLM, and more!
Many users across various forums, social media, and elsewhere are starting to receive their hardware! I have enabled GitHub Discussions to centralize these great conversations: stop by, introduce yourself, and let us know how things are going with Willow! Between GitHub Discussions and Issues, we can all work together to make sure our early adopters have the best experience possible!
Visit the official documentation at heywillow.io.