Improve model availability and enable external/custom model sources #680
Replies: 2 comments
IMO it all depends on the goals of Foundry Local. From the consumer or developer POV, I agree with you. But from the enterprise POV I view things differently. For an enterprise, governance and compliance of models is a top concern. You can view a model as a form of application, and many enterprises restrict what applications can be deployed to an enterprise Windows device. Right now many are blocking the typical model repos (like Foundrylocal.ai) because they want to control the models that get deployed. Windows Update is how the EPs are deployed, and enterprises are limiting what can be deployed through it. So I can see Foundry Local as a way organizations can limit which models get deployed.
Thanks for the feedback! The following FAQ should address this: https://github.com/microsoft/Foundry-Local#why-doesnt-foundry-local-support-every-available-model. The primary aim of Foundry Local is to allow developers to ship AI features that exploit on-device capabilities. With this in mind, we carefully select and optimize models. A good example of what we do is articulated in this article on real-time on-device streaming: https://arxiv.org/pdf/2604.14493 Here we did an extensive test of model candidates for real-time audio streaming and then optimized the chosen model so that it was 4X smaller and 4X faster without loss of quality (WER was still 8%). You'll see in the candidate selection that a lot of thought goes into "can this run on a hardware-constrained device with quality".
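For readers wondering where a "4X smaller" figure typically comes from: compressing float32 weights to int8 is a common source of exactly that ratio (4 bytes down to 1 byte per weight). The sketch below is a toy illustration of symmetric linear quantization using only the Python standard library; it is not the actual optimization pipeline used by the Foundry Local team, and the weight values are made up.

```python
from array import array

# Toy weights: 1024 float32 values (4 bytes each).
weights = array('f', [0.12, -0.5, 0.98, -0.03] * 256)

# Symmetric linear quantization: map [-max_abs, max_abs] -> [-127, 127].
scale = max(abs(w) for w in weights) / 127
quantized = array('b', [round(w / scale) for w in weights])  # int8, 1 byte each

fp32_bytes = weights.itemsize * len(weights)
int8_bytes = quantized.itemsize * len(quantized)
print(fp32_bytes // int8_bytes)  # -> 4, the "4X smaller" storage ratio

# Dequantize and check the round-trip error stays below one quantization step.
max_err = max(abs(w - q * scale) for w, q in zip(weights, quantized))
print(max_err < scale)  # -> True
```

Real pipelines add calibration data, per-channel scales, and operator fusion on top of this idea, which is also where the speedup (not just the size reduction) comes from.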
Foundry Local is a very interesting project with strong potential for local AI workflows. 👏
However, one major limitation today is model availability and update cadence, which feel quite restrictive. The current model catalog evolves slowly, limiting experimentation compared to the broader AI ecosystem.
Problem
I believe this makes it harder to adopt Foundry Local as a truly flexible local AI platform.