Hallucination-proof way for an LLM to populate data in UI within agent libraries #1246

@jacobsimionato


Imagine a use case like:

  • An agent analyses 20 available air conditioners, considering price, appearance, performance etc
  • The agent wants to recommend the 3 top options to the user, explaining why they are a good fit for the user's home and specific needs (very quiet, blending into the decor)
  • The agent creates a custom UI showing the three models with name, image, price pulled from a database, and some additional analysis e.g. decibel noise rating pulled from documentation and an AI-generated image of the unit integrated into the user's home

Problem: How can we prevent this system from hallucinating the price of a unit and thus misleading the user? This is a real risk because the agent may have access to tools, reviews, documentation etc. listing an older sale price for the unit which is no longer correct.

Several architectures are possible, each striking a different balance between flexibility and reliability.

Some directions:

Opaque Components

Define an opaque component like ProductCard(productId). The agent specifies an ID, and the name, cost, and product image are pulled directly from a database and populated.

The advantage here is that the ProductCard is protected from being misleading, because it must always show the title, cost, and product image fetched correctly from the database. The downside is that the agent has no ability to customise the component, e.g. to show the dB noise rating or an AI-generated image. It could wrap the component to add these, but the result might look visually awkward.

This pattern is already possible if you implement the data fetching on the client side where the component is implemented.
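A minimal sketch of the client-side version of this pattern, assuming a hypothetical `fetchProduct` lookup and `Product` shape (neither is part of any existing A2UI API): the agent's only degree of freedom is the product ID, so every displayed value is database-sourced.

```typescript
interface Product {
  id: string;
  name: string;
  priceCents: number;
  imageUrl: string;
}

// Stand-in for a real database lookup on the client.
const productDb: Record<string, Product> = {
  "ac-042": { id: "ac-042", name: "QuietCool 9000", priceCents: 49900, imageUrl: "/img/ac-042.png" },
};

function fetchProduct(productId: string): Product {
  const product = productDb[productId];
  if (!product) throw new Error(`Unknown productId: ${productId}`);
  return product;
}

// The agent supplies only the ID; name, price, and image all come from
// fetchProduct, so the price cannot be hallucinated.
function renderProductCard(productId: string): { title: string; price: string; image: string } {
  const p = fetchProduct(productId);
  return {
    title: p.name,
    price: `$${(p.priceCents / 100).toFixed(2)}`,
    image: p.imageUrl,
  };
}
```

An invalid ID fails loudly rather than rendering a fabricated card, which is the point of making the component opaque.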

Possible project: Add a concept of server-side "Composite Components" which are implemented on top of underlying "Primitive Components" that the client supports. The LLM would specify ProductCard, then some server side logic would translate it into multiple more granular components and inject real data in the process.
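The server-side expansion step could look like the following sketch, where the LLM emits only `{ component: "ProductCard", productId }` and server logic translates it into primitive components with real data injected. The component names and database shape are illustrative, not an existing A2UI API.

```typescript
type PrimitiveComponent =
  | { type: "Text"; value: string }
  | { type: "Image"; src: string };

interface CompositeCard {
  component: "ProductCard";
  productId: string;
}

// Minimal stand-in for the product database.
const db: Record<string, { name: string; price: string; imageUrl: string }> = {
  "ac-042": { name: "QuietCool 9000", price: "$499.00", imageUrl: "/img/ac-042.png" },
};

// Expand the composite component into primitives the client supports.
// The LLM never writes these values; they are injected here from the db.
function expandProductCard(card: CompositeCard): PrimitiveComponent[] {
  const record = db[card.productId];
  if (!record) throw new Error(`Unknown productId: ${card.productId}`);
  return [
    { type: "Text", value: record.name },
    { type: "Text", value: record.price },
    { type: "Image", src: record.imageUrl },
  ];
}
```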

Agent-controlled data mapping

Give the agent full granular control over the layout, but require that data values are pulled from some real data source. This ensures that the data values the user sees cannot be hallucinated, since they come from a real source. However, there is still a risk that the AI maps the values in a misleading way, e.g. mixing up the cost of product A and product B, or presenting a weight stored in kg as lb in the UI.

Possible project: Introduce a "secure data model" inference pattern in the A2UI agent libraries where the data model is populated directly from tool calls rather than being LLM generated.

  1. Agent calls tools

  2. JSON from tools is added to the data model automatically, at automatically generated keys like "productSearch1" or "dysonReviews".

  3. The UI generation agent is given the data model and asked to generate only the component tree. Potentially, it could be prevented from using literal values in the component tree, to ensure that all data is pulled from some source. Alternatively, when the agent specifies a literal value, the UI could add a badge like "AI generated". This way, the user might see a product table where name and price have no badge, but the AI-generated pros and cons columns do.
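The three steps above can be sketched as follows. All names (`addToolResult`, `NodeValue`, the dotted-path convention) are assumptions for illustration, not part of the A2UI libraries: tool JSON lands in the data model under a generated key, the component tree references values by path, and any literal the agent writes is rendered with an "AI generated" badge.

```typescript
type NodeValue =
  | { kind: "ref"; path: string }       // must resolve against the data model
  | { kind: "literal"; value: string }; // allowed, but badged

interface DataModel {
  [key: string]: unknown;
}

let toolCallCounter = 0;

// Step 2: tool JSON is added automatically under a generated key
// like "productSearch1".
function addToolResult(model: DataModel, toolName: string, result: unknown): string {
  const key = `${toolName}${++toolCallCounter}`;
  model[key] = result;
  return key;
}

// Resolve a dotted path like "productSearch1.price" against the model.
function resolvePath(model: DataModel, path: string): unknown {
  let current: unknown = model;
  for (const part of path.split(".")) {
    if (current == null || typeof current !== "object") return undefined;
    current = (current as Record<string, unknown>)[part];
  }
  return current;
}

// Step 3: refs are looked up in the data model (cannot be hallucinated);
// literals are permitted but flagged so the user can tell them apart.
function renderValue(model: DataModel, node: NodeValue): { text: string; badge?: string } {
  if (node.kind === "ref") {
    const value = resolvePath(model, node.path);
    if (value === undefined) throw new Error(`Dangling ref: ${node.path}`);
    return { text: String(value) };
  }
  return { text: node.value, badge: "AI generated" };
}
```

A ref to a missing path fails at render time rather than silently showing stale or invented data, while the badge gives a middle ground between banning literals entirely and trusting them blindly.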
