# Choose model #8
There could be more than one LLM in a web browser (built-in or added as a web extension). Let's show users the list of available LLMs (using their IDs) and allow them to optionally choose a model when creating a session.

For example:
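A sketch of what this could look like (`listTextModels`, `createTextSession`, and the `model` option are hypothetical names; only `ai.textModelInfo()` appears elsewhere in this thread):

```js
// Hypothetical API shape for illustration only.
// List the IDs of all locally available models (built-in or extension-provided).
const models = await ai.listTextModels();
// e.g. ['gemini-nano', 'phi-3-mini']

// Optionally pick one when creating a session; omitting `model` would fall
// back to the browser's default.
const session = await ai.createTextSession({ model: models[0] });
const answer = await session.prompt('Summarize this page in one sentence.');
```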
## Comments

I guess you meant developer here, not user, right?

Users of the API, i.e. developers.
I assume this could be problematic, as it would create a fingerprinting vector, compromising user privacy. Additionally, this approach might lack forward compatibility, as models are likely to evolve and change over time. A more robust solution could be to expose metadata about each model, such as context window size, number of parameters, supported languages, and relevant capabilities (translation, etc.). This way, developers can make informed decisions based on the features and performance characteristics they need without directly exposing model IDs.
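For illustration, such metadata-driven selection might surface as requirements passed at session creation, so no model ID ever crosses the API boundary (every option name below is invented):

```js
// Hypothetical sketch: the developer states requirements and the browser
// picks a matching local model internally, instead of exposing model IDs.
const session = await ai.createTextSession({
  requirements: {
    minContextWindow: 8192,        // tokens
    capabilities: ['translation'], // required features
    languages: ['en', 'de'],       // input/output languages
  },
});
```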
@christianliebel How would exposing the model ID be more problematic in terms of fingerprinting when the user-agent name and version are already available, as well as `textModelInfo`, from which the built-in LLM is easily deduced (Google Chrome -> Gemini Nano)?

```js
const modelInfo = await ai.textModelInfo('gemini-nano');
// {id: 'gemini-nano', version: '1.0', defaultTemperature: 0.8, defaultTopK: 3, maxTopK: 10}
```

This would allow web developers to choose the best-fitting local model depending on the use case (e.g. math, reasoning, poetry). Also, there should be a way to add custom models as web extensions (#11).
The composition of models (especially when you register custom ones) could be pretty unique, similar to fonts.
@christianliebel It's the same level of uniqueness as detecting which web extensions are installed, like extension-detector. Even ad blockers can be detected. I think the possibility to choose among multiple local LLMs justifies a slightly bigger fingerprinting surface. If you care about privacy, you just won't install any additional LLMs.
I was going to create an issue about choosing versions, but I see the suggestion that version be part of the textModelInfo. I imagine that developers may want to give the user the choice of proceeding with their currently downloaded model or downloading a newer version. And it would be nice if the user could somehow make an informed decision about how big the download is and how significant the upgrade is.
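For illustration, such a flow might look roughly like this (the `latestVersion` and `downloadSizeMB` fields and the `version` option are invented; only `textModelInfo` itself appears in this thread):

```js
// Hypothetical sketch: let the user decide whether a model upgrade is worth
// the download before creating a session.
const info = await ai.textModelInfo('gemini-nano');
let version = info.version;
if (info.latestVersion !== info.version) {
  const upgrade = confirm(
    `Version ${info.latestVersion} (~${info.downloadSizeMB} MB download) is ` +
    `available. Upgrade, or continue with ${info.version}?`
  );
  if (upgrade) version = info.latestVersion;
}
const session = await ai.createTextSession({ model: 'gemini-nano', version });
```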
I've proposed a breakout session to discuss some of the privacy tradeoffs for AI model management at W3C TPAC in two weeks, see w3c/tpac2024-breakouts#15.

Indeed, the set of models available (if shared cross-site, and if they can be updated independently of the browser version) does create some fingerprinting risks. On the other hand, if they are tied to APIs that are updated and packaged along with the browser version, so that the set of available models can be predicted exactly from the browser version (which, as has been pointed out above, is known), then that knowledge adds no new fingerprinting information.

What I am interested in is the middle ground, with potential solutions like shared foundation models (distributed with the browser and tied to the browser version, so no new information) plus same-origin-cached adapters (which can be small). But that is one of several different options with different tradeoffs, and there are a bunch of missing bits in the specifications right now.
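For illustration, that middle ground might surface to developers roughly like this (everything in this sketch is invented; no such API is specified in this thread):

```js
// Hypothetical sketch: the foundation model ships with the browser (so it
// reveals nothing beyond the UA version), while a small task-specific adapter
// is fetched by the page and cached per origin.
const adapterBytes = await fetch('/models/my-task-adapter.bin')
  .then((r) => r.arrayBuffer());
const session = await ai.createTextSession({ adapter: adapterBytes });
```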