
MDN can now automatically lie to people seeking technical information #9208

Open
3 tasks done
eevee opened this issue Jun 30, 2023 · 115 comments
Labels
🧑‍🤝‍🧑 community contributions by our wonderful community

Comments

@eevee

eevee commented Jun 30, 2023

Summary

MDN's new "ai explain" button on code blocks generates human-like text that may be correct by happenstance, or may contain convincing falsehoods. this is a strange decision for a technical reference.

URL

https://developer.mozilla.org/en-US/docs/Web/CSS/grid

Reproduction steps

as soon as i heard about this, i visited the first MDN article in my address bar history (for the grid property), hit "ai explain" on the first code block encountered (the syntax summary), and received the following information:

grid: "a" 100px "b" 1fr;: This value sets the grid template to have two rows and two columns. The first row has a height of 100 pixels and the second row has a height of 1 fraction unit (1fr). The columns are named "a" and "b".

which is deeply but subtly incorrect — this creates only one column (more would require a slash), and the quoted strings are names of areas, not columns. but it's believable, and it's interwoven with explanations of other property values that are correct. this is especially bad since grid is a complex property with a complex shorthand syntax — exactly the sort of thing someone might want to hit an "explain" button on.
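for the record, here's my reading of what that shorthand actually does, per the grid-template grammar (this expansion is my own sketch, not MDN content):

/* grid: "a" 100px "b" 1fr; is roughly equivalent to: */
grid-template-areas:
  "a"
  "b";                           /* two rows, each one area wide: a single column */
grid-template-rows: 100px 1fr;   /* the 100px and 1fr are ROW heights */
grid-template-columns: none;     /* explicit columns would need a slash, */
                                 /* e.g. grid: "a b" 100px "c d" 1fr / 1fr 2fr; */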

the generated text appears to be unreviewed, unreliable, unaccountable, and even unable to be corrected. at least if the text were baked into a repository, it could be subject to human oversight and pull requests, but as best i can tell it's just in a cache somewhere? it seems like this feature was conceived, developed, and deployed without even considering that an LLM might generate convincing gibberish, even though that's precisely what they're designed to do.

and far from disclaiming that the responses might be confidently wrong, you have called it a "trusted companion". i don't understand this.

Expected behavior

i would like MDN to contain correct information

Actual behavior

MDN has generated a convincing-sounding lie and there is no apparent process for correcting it

Device

Desktop

Browser

Firefox

Browser version

Stable

Operating system

Linux

Screenshot

No response

Anything else?

No response

Validations

@github-actions github-actions bot added the needs triage Triage needed by staff and/or partners. Automatically applied when an issue is opened. label Jun 30, 2023
@MrLightningBolt

Confirming. This "AI" snake oil is worse than useless for the reasons described above; other examples are trivial to create. It makes MDN worse just being there.

@catleeball

catleeball commented Jun 30, 2023

Generated code (without any human vetting for correctness and human curation for relevance) is a hazard since it can produce plausible-sounding disinformation.

I strongly feel that the AI help feature is likely to cause much more damage than it would possibly help.


Edit: To clarify, I think the best path forward is to offer only documentation written by humans, ideally reviewed by people who have domain expertise. E.g., pay technical writers and keep documentation available for community-suggested edits.

@MrPetovan

You don't explain code blocks using plausible-sounding plain text; you explain it visually by linking to a JsFiddle page with that code.

This is a deeply misguided feature that will produce disinformation at scale, like all other LLM applications.

@Eragonfr

You don't explain code blocks using plausible-sounding plain text; you explain it visually by linking to a JsFiddle page with that code.

I don't agree that linking to a fiddle is enough; you need a plain-text explanation of what the code does. But the explanation needs to be peer-reviewed and fact-checked, not some plausible-sounding garbage generated by an AI.

@lifning

lifning commented Jun 30, 2023

and it's hardly an isolated occurrence. this is pervasively impossible to trust.
[Screenshot: the AI incorrectly claiming that a given code example isn't valid CSS]

@mandynicole

Releasing this alongside the long-awaited Play was also a choice. We should be empowering the community to create examples built on MDN itself.

@Xkeeper0

As another example, https://developer.mozilla.org/en-US/docs/Web/HTML/Element/portal

From the page text (emphasis mine):

A <portal> is similar to an <iframe>. An <iframe> allows a separate browsing context to be embedded. However, the embedded content of a <portal> is more limited than that of an <iframe>. It cannot be interacted with, and therefore is not suitable for embedding widgets into a document. Instead, the <portal> acts as a preview of the content of another page. It can be navigated into therefore allowing for seamless transition to the embedded content.

The "AI Explain" feature:

<portal id="exampleportal" src="https://example.com/"></portal>

The code example you provided is an HTML code snippet that demonstrates the usage of the <portal> element.

The <portal> element is a new HTML element introduced in the HTML5.3 specification. It allows you to embed a remote document into your current document, creating a portal-like experience.
In the code example, there is a <portal> element with an id attribute set to "exampleportal" and a src attribute set to "https://example.com/". The id attribute is used to uniquely identify the portal element, while the src attribute specifies the URL of the remote document that will be displayed within the portal.
When the page is loaded, the browser will fetch the remote document specified by the src attribute and display it within the <portal> element. The remote document can be any valid HTML document, and it can contain interactive content such as forms, videos, or other web components.
The <portal> element provides a way to seamlessly integrate remote content into your web page, allowing users to interact with it as if it were part of the current document. This can be useful for scenarios such as displaying live data from another website or embedding a third-party widget into your page.
It's important to note that the <portal> element is still an experimental feature and may not be supported by all browsers. Therefore, it's recommended to check the browser compatibility before using it in production.

In this case, the AI explanation directly contradicts the page itself!

@datarocks

As someone who occasionally dips his toes in CSS when no one else is available and a thing needs to be fixed, I depend on these docs to be dependable. This is even more true today than it was in the past, as LLM generated chum spreads across the web. Please keep MDN as a trusted source of human generated developer documentation.

@patrickhlauke

This is both sad and deeply embarrassing for the MDN team...

@ericwbailey
Member

ericwbailey commented Jun 30, 2023

The inclusion of this feature poses a great deal of risk to folks' ability to learn code effectively, especially where the generated explanation has the potential to perpetuate bias and misunderstanding from the content the LLM is sourced from.

I would also like to note the ethical and environmental concerns surrounding how LLMs are constructed. It saddens me to see this feature as a former MDN editor.

@avdi

avdi commented Jun 30, 2023

I didn't spend a decade trying to convince people to use MDN over the shovelfuls of low-quality SEO-farming craptext on W3Schools, only for them to be presented with shovelfuls of low-quality AI craptext on MDN

@alensiljak

The next generation of AI will be trained on this. Just sayin'...

@Nyumat

Nyumat commented Jun 30, 2023

Considering that MDN's "AI Help" feature is a semi-paid service, this is a huge letdown to both see and use.

This new feature claims to be powered by OpenAI's GPT-3.5, yet ChatGPT is purely a language model, not a knowledge model. Its job is to generate outputs that seem like they were written by a human, not to be right about everything.

In the context of web development as a whole, we cannot count on LLMs to "facilitate our learning". I cannot overstate how terrible and drastic this blow to customer trust is. ❌

MDN has been one of the leading resources for aspiring and current professional developers in the web world. This new beta "help" feature is taking away from the integrity and trustworthiness of a once-fantastic site to learn from.

Thank you, OP, for opening this issue; MDN's team needs to be better.

@dwminer

dwminer commented Jun 30, 2023

I use MDN because it's a comprehensive and accurate source of documentation with no fluff. I fail to see how LLM output prone to egregious inaccuracies improves that. It dramatically weakens my confidence in MDN and I fear that its inclusion will promote an over-reliance on cheap but unreliable text generation.

@brndnmtthws

We've come full circle and we've learned nothing.

[Image: Clippy-letter]

@aardrian

Another example, from the Accessibility concerns section of <s>: The Strikethrough element, which offers this CSS:

s::before,
s::after {
  clip-path: inset(100%);
  clip: rect(1px, 1px, 1px, 1px);
  height: 1px;
  overflow: hidden;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}

s::before {
  content: " [start of stricken text] ";
}

s::after {
  content: " [end of stricken text] ";
}

The AI wraps up its explanation with this:

Overall, this code creates a strikethrough effect by hiding the content of the "s" element and adding visible text before and after it.

That is demonstrably wrong. There is no demo of that code showing it in action. A developer who uses this code and expects the outcome the AI said to expect would be disappointed (at best).

That was from the very first page I hit that had an accessibility note. Which means I am wary of what genuine user-harming advice this tool will offer on more complex concepts than simple stricken text.
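For anyone unfamiliar with the pattern, that first rule is the standard visually-hidden technique: every declaration exists to remove the generated text from view while keeping it available to screen readers. Annotated below (the comments are mine, not MDN's):

s::before,
s::after {
  clip-path: inset(100%);         /* clip the box away entirely (modern browsers) */
  clip: rect(1px, 1px, 1px, 1px); /* legacy fallback clip */
  height: 1px;
  width: 1px;                     /* collapse to a 1px box */
  overflow: hidden;               /* hide anything that would spill out */
  position: absolute;             /* remove it from the layout flow */
  white-space: nowrap;            /* keep the text from wrapping and leaking out */
}

Nothing in that rule draws a strikethrough; the visible line-through comes from the browser's default styling of <s> itself.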

@ericwbailey
Member

ericwbailey commented Jun 30, 2023

To @aardrian's point: utilizing inaccessible code may have legal ramifications, to say nothing of the ethical problems of restricting others' access. What risks and responsibilities does MDN incur if an organization incorporates inaccessible code suggestions and advice provided by this feature?

@fenndev

fenndev commented Jun 30, 2023

As a person working towards becoming a web developer, I trust MDN to contain accurate, fact-checked information. For every minute this may save someone, it would surely cost hours of troubleshooting for another, especially newer developers who utilize MDN as a learning and reference tool extensively. This is damaging both to the developer community and the reputation of MDN as a trusted resource; while I might not have extensive experience as a web developer, I hope that a newbie perspective might also be helpful.

@fernandoacorreia

Deciding to implement this feature implies a fundamental misunderstanding about what LLMs do. MDN users are looking for authoritative, correct information, not for plausible-looking autogenerated fiction. This puts the good judgment of MDN's team in question.

@AMDAndy

AMDAndy commented Jun 30, 2023

I am warning my team about this feature and letting them know not to trust it.

@colin-p-hill

This feature does not seem to be well-targeted at the problem it is meant to solve. Writing technical documentation is time-consuming and difficult, so wanting to automate it is understandable – but the target audience is precisely those people who do not have the requisite knowledge to spot mistakes, so the "Was this answer useful?" feedback buttons don't seem likely to weed out bad explanations quickly or reliably enough to avoid problems.

There is already some work done on reasoning about where and how to automate tasks appropriately and effectively, and I recommend using it as a starting point for designing features like this. It may be more appropriate in this case, for example, to build a tool at Sheridan and Verplank's LOA 3 by using AI to generate text assets which are then reviewed and edited by a human expert before publication.

@PrivateGER

Placing GPT-based generations on a website that used to be for accurate documentation is so incredibly off-brand that I find it just...confusing. Newbies will find this, they will use this, and they will be fed misinformation that they cannot reasonably be expected to discern.

There's nothing really to be gained by this feature; it just smells like chasing trends with no thought given to the actual downsides. Not to mention the legal issues that stem from generated code matching publicly licensed code, which remains an unsolved problem.

@krryan

krryan commented Jun 30, 2023

It is beyond bizarre that I will now have to recommend people avoid MDN and use w3schools instead.

@aardrian

This from the <mark> element page gets the same CSS concept wrong in a fun new way.

mark::before,
mark::after {
  clip-path: inset(100%);
  clip: rect(1px, 1px, 1px, 1px);
  height: 1px;
  overflow: hidden;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}

mark::before {
  content: " [highlight start] ";
}

mark::after {
  content: " [highlight end] ";
}

From the fake-AI:

Overall, this code example creates a highlight effect by using pseudo-elements to add invisible elements before and after the content of the <mark> element. These invisible elements are positioned absolutely and have a small size, effectively hiding them from view. The content property is used to add visible text before and after the <mark> element's content, creating the highlight effect.

Essentially the same code, from the <del> element page, gets this explanation:

Overall, this code example creates a visual representation of a deleted text by hiding the content of the <del> element and adding " [deletion start] " before the hidden content and " [deletion end] " after the hidden content.

I will spare you the same advice for the same code on the <ins> page.

The point is, the LLM in use does not understand CSS. Nor accessibility.

@patrickhlauke

i mean, at this stage, they should at the very least add a big fat "this explanation may actually be complete toss" warning in front of it. or, you know, reevaluate what the actual point of having this "feature" is, if it's a crap-shoot whether it's useful or just a pile of hallucinated rubbish

@DavidJCobb

DavidJCobb commented Jun 30, 2023

What is this feature even meant to offer? It's taking documentation and examples authored by thinking human beings who are capable of comprehending things, and bolting on clumsily-generated nonsense written by an uncomprehending automaton. That is: there's already an explanation; the entire page is an explanation; and if that explanation is insufficient, it should be edited by a thinking human being to improve it; sloppily bolting an AI-generated addendum onto it is not the right approach.

Even just looking at more of the code blocks on the article for grid: I clicked "AI Explain" on the HTML for one of the examples -- a code block with a #container element and several empty divs. Predictably, the LLM spat out three or four paragraphs of "middle schooler padding for word count"-tier dross about how the example "demonstrates how to add child elements," because the LLM couldn't comprehend the context of the code block. It couldn't and didn't properly explain the code block in the context of the grid property, the broader thing that that HTML was meant to demonstrate. "The HTML was setting up a single grid container, and a set of divs that would be rendered as colored blocks to visually illustrate the grid layout." If an explanation is actually necessary, that's a proper explanation.
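(From memory, that code block is shaped roughly like this -- a paraphrase of mine, not the exact markup from the article:)

<div id="container">
  <div></div>
  <div></div>
  <div></div>
</div>

Nothing in the markup alone hints at its purpose; the meaning only exists in the context of the CSS and the surrounding article, which is exactly the context the LLM ignored.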

Everything about this is blatantly, obviously wrong. What understanding of LLMs and of documentation as a concept could possibly lead to someone thinking this is a good idea?

@patrickhlauke

What understanding of LLMs and of documentation as a concept could possibly lead to someone thinking this is a good idea?

the thinking of "actual human writers are expensive (if actually employed) ... we can save money through the power of AI"

@makyen

makyen commented Jul 1, 2023

I view this "feature" as a fundamental betrayal of the core MDN mission. That this "feature" would make it past the concept stage to even begin implementation demonstrates either a total lack of understanding of how LLM machine learning works and what "genAI" is capable of and/or a total disregard of MDN's mission. Having either of those happen in the process from concept to implementation is a complete failure.

By implementing and deploying this "feature", MDN has convinced me to stop contributing to MDN and cease donating to the Mozilla Foundation, because I am completely unwilling to participate in perpetuating the massive disinformation which this "feature" presents to users and the dramatic confusion and waste of people's time which it will cause.

Obviously, I will also stop recommending MDN as a good source of documentation. I will also need to remove links to MDN from everything I've written which can be edited.

I am so very, very disappointed in Mozilla/MDN.

@acdha

acdha commented Jul 1, 2023

This was very disappointing as a now-former MDN contributor and subscriber. The whole point of MDN was authoritative content, but until there are some fundamental improvements in LLMs I might as well be supporting W3 Schools.

@nyeogmi

This comment was marked as outdated.

@Akselmo

This comment was marked as off-topic.

@Adamkaram

This comment was marked as off-topic.

@sideshowbarker sideshowbarker added 🧑‍🤝‍🧑 community contributions by our wonderful community dx developer experience and removed needs triage Triage needed by staff and/or partners. Automatically applied when an issue is opened. labels Jul 2, 2023
@Be-ing

Be-ing commented Jul 2, 2023

So the "solution" is adding a disclaimer and a survey instead of removing the false information? 🙃 🙃 🙃 Every change to the MDN website so far has been doubling down while pretending to listen. It's starting to look like a fork may be warranted...

@MrLightningBolt

So either someone is paying Mozilla to do this, or it's being recklessly championed by someone who cares more about getting on the AI hype train than about MDN being a reliable source of information; in either case, it means MDN is effectively compromised in such a way that it can no longer be considered trustworthy.

@Zarthus

Zarthus commented Jul 2, 2023

I would have liked to see more interaction from the authors (@LeoMcA, @fiji-flo) in this thread rather than just a merge request that was approved & merged within 12 minutes of inception.

It feels to me like @sideshowbarker actually understands and is just picking up the pieces/doing damage control, while other maintainers (still) are just doing their own thing without showing accountability or fully understanding the problem.

It's totally okay to let a merge request linger for longer than 15 minutes and garner some feedback, especially if you have recently made a mistake and seek to remedy it once the firefighting is already done. Getting it out the door ASAP just comes across as untrustworthy or rushed to me.

To reiterate what was said 2 days ago by Sorixelle:

#9208 (comment)

Glad it's partially reverted for now, but... commits directly to main, without so much as a PR, or even a message in this issue we have for tracking this? Sounds like Mozilla has some serious accountability concerns to address.

I hope MDN writes a public post-mortem/blog post once this has all been discussed more thoroughly in depth by the internal team and they've had time to retrospect.

@ghost

ghost commented Jul 2, 2023

Horribly disappointed to see this patched over with a disclaimer. Are you kidding? "It may be inaccurate" means it should not exist. I do not think I can, in good conscience, support MDN or Mozilla at large if this is where things are going.

@MxSelfDestruct

this is an entirely unnecessary antifeature that's going to mislead a lot of people for virtually no benefit. please remove.

@Sorixelle

Mozilla, if you're trying to actively erode community trust in you as effective stewards of the MDN, you're doing a phenomenal job. All we're asking you to do is to take on board community feedback - I'd think the almost complete lack of voices in favour of this "feature" in this high-exposure issue is unanimous enough - and pull out this thing that no one wants. We don't want a disclaimer that "the information may not be correct" - we already know that. We don't want the misinformation in the first place.

Why is community feedback being ignored, and why does Mozilla consider this feature so important that it should be kept despite near unanimous community backlash?

@MrPetovan

At this point, isn't it cheaper to just disable the feature and avoid potentially costly LLM API requests? The absence of a feedback collection form in the answers makes me think it isn't a ploy by an LLM provider to get free digital work from MDN visitors pointing out the answers' inaccuracies.

@jibsaramnim

I can't help but feel worried about this whole addition. For the beacon that Mozilla represents to me, this whole jumping on bandwagons thing really feels unnecessary. Especially if it doesn't even provide something consistently accurate and useful — to visitors of MDN, anyway.

For anyone donating to the Mozilla Foundation, it might be worth reaching out to them via their contact form to share your concerns as a donor (too). I don't know if it can help in any meaningful way, but my thinking is that they need feedback from the community to know what the community prefers, so any way to get that feedback could be helpful.

@WebReflection

WebReflection commented Jul 3, 2023

@SpaceMageWhatever I guess you just underlined why this comment was relevant

If you came here to share your experience with AI while declaring you don't even use MDN, I think you might be in the wrong place.

Thanks for sharing, though. At first glance, nobody would've complained if this service had produced desired/expected results ... the time is not right to trash MDN's own content and contradict it via its own service with stuff that is not accurate and might not work at all.

Happy to hear that's not the case for everyone, but MDN is considered trustworthy, and ChatGPT isn't yet up to serving that task out of the box.

@Qix-

Qix- commented Jul 3, 2023

@SpaceMageWhatever MDN is a reference, not a tutorials/guides/beginner site. Two subtly different goals in terms of technical documentation. It's not there to cater to beginners; it's there to serve as a reference that is slightly more human-friendly than the W3C specifications.

@RonaldRuckus

RonaldRuckus commented Jul 3, 2023

How delusional is Mozilla to actually think GPT can explain new documentation? Unless they decide to implement their own knowledge graph and maintain the shit out of it, it is bound to produce misinformation.

It's not a feature. It's a bug. ChatGPT, and any sort of LLM, will inherently fall back on the previous/outdated information it was trained on, misleading and confusing any new learners.

@resuna

resuna commented Jul 3, 2023

am i the only one who thinks this is kinda neat?

I hope so.

Unless they decide to implement their own knowledge graph and maintain the shit out of it, it is bound to produce misinformation.

Even if they do, it's bound to produce misinformation. It produces likely sequences of words, regardless of whether they're causally related.

@RonaldRuckus

RonaldRuckus commented Jul 3, 2023

"Please verify the information as it may not be accurate"

Verify it how, Mozilla? Perhaps through, I don't know, a REFERENCE? One that, presumably, a person didn't fully comprehend and so needed a trustworthy source to elaborate on?

@caugner
Contributor

caugner commented Jul 3, 2023

Hi there, 👋

Thank you all for taking the time to provide feedback about our AI features, AI Explain and AI Help, and to participate in this discussion, which has probably been the most active one in some time. Congratulations on being a part of it! 👏

There's a lot to read and unpack, and I know y'all are waiting for some sort of official statement. We're not there yet, but rest assured it's in the making, and we should have something to share before the end of the week.

For now, let me point out (like others have before) that AI Explain and AI Help are separate features that work differently (see this blog post for details on how AI Help works), and so while AI Explain was disabled on Saturday, AI Help continues to be available to MDN Plus users (5 questions per 24h for free users, unlimited for paying subscribers).

We have also received some valuable feedback about AI Help, some of which we have already reacted to by adding a disclaimer, and we will continue to act upon feedback.

If you happen to encounter AI Help answers of low quality to reasonable questions, please do report the specific examples (text + screenshot) to us so that we can look into them. 🙏

In the meantime, I will go ahead and lock this issue to collaborators to reduce noise, but please stay tuned. Your feedback has been heard and we'll make sure to share important updates in this issue as well.

@mdn mdn locked as too heated and limited conversation to collaborators Jul 3, 2023
@caugner caugner changed the title MDN can now automatically lie to people seeking technical information AI Explain sometimes provides incorrect explanations Jul 3, 2023
@sideshowbarker
Collaborator

sideshowbarker commented Jul 3, 2023

To the 1287+ people who upvoted this issue before it was locked, and to the dozens of you who took time to comment on it before it was locked: further discussion with Mozilla about this will be taking place this week, and I can promise you I'll make sure your voices get represented loud and clear. And I'll post any further updates of my own here after more discussion with Mozilla has happened.

@fiji-flo fiji-flo changed the title AI Explain sometimes provides incorrect explanations MDN can now automatically lie to people seeking technical information Jul 3, 2023
@sideshowbarker
Collaborator

#9230 is a related issue that was raised a couple hours ago.

@couci
Collaborator

couci commented Jul 7, 2023

Hello all,

In the spirit of open conversation, we are inviting you to attend the MDN community call on Wednesday, 12th of July at 4:30 pm UTC. We plan to discuss the recent releases of AI Help and AI Explain and our future plans, and to go through some of the feedback received so far. If you'd like to ask us any specific questions, you can pre-submit them here (by clicking the "New discussion" button on the top right).

Important: The call will be live-streamed on AirMo but not recorded. We will also monitor the Discord and Matrix rooms for live questions from people who can't attend the Zoom room. All the questions and answers (pre-submitted and live) will be noted and shared afterward for people who can't attend.

When: Wednesday 12th of July, 4:30pm UTC
Where: Zoom room and AirMo live stream
Questions: Pre-submit (click the “New discussion” button) and upvote questions in the Community calls category

@caugner caugner removed the dx developer experience label Jul 7, 2023