
AI Help is linked on all pages #9214

Closed
3 tasks done
nyeogmi opened this issue Jul 1, 2023 · 7 comments

Comments


nyeogmi commented Jul 1, 2023

Summary

(Relevant to a previous issue submitted by @eevee: here)

The AI Explain button, which generates incorrect explanations of technical examples on MDN, was temporarily removed earlier today by @fiji-flo.

In the linked thread, it was discovered that in many cases, the system summarizes code incorrectly or generates inaccurate information about browser features. The AI Help button, which invokes the same model with even less context, still exists.

The feature is paywalled, so I can't test it, but both buttons invoke the same model. Since AI Help is not even given known-correct documentation or example code, it is likely at least as incorrect as AI Explain.

URL

The button is visible here: https://developer.mozilla.org/en-US/

The button itself goes to this page: https://developer.mozilla.org/en-US/plus/ai-help

Reproduction steps

To see the page itself:

  1. Visit the page in question.
  2. Witness upsell.

To see a link to the page:

  1. Visit any page on MDN.
  2. Look at title bar.

Expected behavior

There should not be a button linking to this feature.

Actual behavior

There is a button linking to this feature.

Device

Desktop

Browser

Chrome

Browser version

Stable

Operating system

Windows

Screenshot

(screenshot attached)

Anything else?

The previous issue is here.

The AI Explain feature was disabled in this commit.


@github-actions github-actions bot added the needs triage Triage needed by staff and/or partners. Automatically applied when an issue is opened. label Jul 1, 2023

nleanba commented Jul 1, 2023

  1. This “feature” is not fully hidden behind a paywall: by creating an account, you can experience its unfounded confidence for free up to five (5) times a day!
  2. Yes, it does produce the same factual errors as the Explain “feature” did, e.g.:
    (screenshot attached)
  • The first part isn’t a “mathematical expression”. Even if it did add the numbers, i.e., without the [ ], the result would be -99990013000.
  • As written, it doesn’t produce -100000000000; it generates the string "10000000-1000-4000-8000-100000000000".
  • The second part is wrong on multiple subtle accounts:
    • even if it were correct about the first part, Number.prototype.replaceAll doesn’t exist,
    • and if the number were somehow cast to a string, the minus sign would still be present,
    • but even then (or if it had gotten the first part correct), the final result would be a TypeError, as String.prototype.replaceAll must be called with a global RegExp.

zadeviggers commented Jul 1, 2023

For those who just want it gone, here's an adblock rule to block the banner and navbar button:

! 2023-07-01 https://developer.mozilla.org
developer.mozilla.org##li.top-level-entry-container:has(a[href*="ai-help"])
developer.mozilla.org##div.top-banner:has(a[href*="ai-help"])


nyeogmi commented Jul 2, 2023

An hour ago LeoMcA added a block of text that says (paraphrased) "do not trust the AI model, it spits out incorrect information sometimes."

(I think it's super awesome that he did this on a Sunday. Not that it's my department, but if I were him I'd just call it "triaged" and not think about it until after the weekend.)

I understand that this is what most platforms with AI models do, but I think this is a bad idea. Paraphrasing comments made by Eevee in the previous thread, users are given this choice:

  • trust the model that lies, because they don't know enough to verify the information themselves
  • verify everything manually, which makes the model useless

This might be less true where a model's output exists only to dig up search terms (meaning it may be less true for AI Help than it was for AI Explain), but it is still broadly true. In the past, this kind of explanatory text has not stopped users from relying on AI-generated information that turned out to be false -- especially when GPT-3 is marketed as a super-smart do-everything bot half the time, and as "probably wrong, dangerous, don't trust me" in small print on its own website.

A disclaimer like this is a lot like putting "not for human consumption" on gas station incense transparently intended to be used as a drug. Functionally it exists to assign blame to users when they use the feature as it was marketed.

Writing a disclaimer that accurately describes the failure modes of a LLM would mean writing something that would drive most users away from ever using the feature: it is irresponsible for that reason to provide the feature in the first place.

One other procedural note: this fix was reviewed by only one person, @fiji-flo -- the person who introduced the feature in the first place. Is there a review process that includes any MDN core reviewers?


meduzen commented Jul 2, 2023

it is irresponsible for that reason to provide the feature in the first place.

I do think the only decent option is to remove the feature, as the last couple of months have shown how unsafe it can be to rely on ML for coding. Alternatively, if that's not possible for MDN (for whatever reason), it would be good to aim for honesty in the messaging, starting with the page title, menu items, and page header:

An adaptation of the MDN menu item for the AI Help feature, renamed to "AI Very Unsafe Help". It also shows an additional warning on the page, where the header now reads "Get answers using generative AI based on MDN content. And take everything with the necessary step back." -- the second sentence is new.


jarcane commented Jul 2, 2023

The whole point of an explanation is to try and help a person who doesn't understand something learn how it works.

If a user doesn't understand something, how exactly are they meant to evaluate if the explanation is accurate?

A disclaimer solves nothing; it simply deflects blame for mistakes onto the person least qualified to avoid them.

Sites like Stack Overflow or Reddit at least have the benefit of users who can upvote/downvote and add context or corrections to answers. This feature has no means of verifying anything, and the nature of GPT means it would be useless in any case unless answers are cached, because GPT is deliberately designed not to give the same answers twice.

So even if it could do what you are promising (which, to be clear, it can't), it still couldn't be held accountable in any useful way, or avoid leading users down pointless dead ends.

The feature should be removed immediately before any further harm can be done.


nyeogmi commented Jul 3, 2023

I do think the only decent option is to remove the feature as the last couple of months showed how unsafe it can be to rely on ML for coding. Alternatively, if it’s not possible for MDN (for whatever reason), it could be good to seek for honesty in the message, starting with the page title, menu items and page header

I like this and will propose the following copy:
(screenshot attached)

caugner (Contributor) commented Jul 3, 2023

Hi there, 👋

@nyeogmi Thank you for taking the time to share your concerns, and rest assured that we continuously monitor user feedback. As @nleanba already pointed out, you can test the AI Help beta feature by signing up for a free MDN Plus account, and I encourage you to do so. If you encounter reasonable web development questions that should be answerable with MDN content, but you don't get a helpful answer, please do share these examples with us (so that we can refine the feature, or close some content gaps if necessary). For now, it is absolutely the intended behavior to have the AI Help links on all MDN pages, so I'll go ahead and close this as wontfix. (Note that AI Help and AI Explain are separate features that behave differently, and only AI Explain was disabled.)

@nleanba Thank you for sharing the example. And good job: you seem to have found an edge case where no relevant MDN content was found, and an answer was given anyway. That's odd, and the expected behavior is definitely to show a message that AI Help cannot answer your question instead. Would you mind filing an mdn/rumba issue for this? As for the question itself, I feel it's very sophisticated and not a typical question we'd expect AI Help users to ask on a daily basis, but do you know which MDN pages a beginner would need to read in order to answer your question about the JavaScript expression? (Maybe it could answer correctly if you refine the question -- or maybe not, and we have some content gaps?!) 🙏

@caugner caugner closed this as not planned Won't fix, can't repro, duplicate, stale Jul 3, 2023
@caugner caugner changed the title from “MDN can _still_ automatically lie to people seeking technical information #9208” to “AI Help is linked on all pages” Jul 3, 2023
@mdn mdn locked as too heated and limited conversation to collaborators Jul 3, 2023
@caugner caugner added closed: invalid and removed needs triage Triage needed by staff and/or partners. Automatically applied when an issue is opened. labels Oct 2, 2023
6 participants