
# Thoughts about ChatGPT, Bard, etc.

## Preamble

I'll be headlining a webinar organized by @WatSPEEDUW next week on ChatGPT, Bard, and their ilk. There's still time to register! https://t.co/cUV9qg5QVv

— Jimmy Lin (@lintool) February 8, 2023

As part of preparing for this event, I've gathered some opinions and predictions that I'm willing to express "on the record". I'll be sharing my thoughts in subsequent tweets. Let's see how they age.

— Jimmy Lin (@lintool) February 8, 2023

## On Prompt Engineering

My (contrarian?) take: prompt engineering is programming in natural language. We've tried this before, with attempts dating back decades. Recent advances do not change the fact that natural languages are ambiguous, imprecise, under-specified, highly contextual, etc.

— Jimmy Lin (@lintool) February 8, 2023

What's going to happen is that prompts will become increasingly stylized, with atoms (tokens) acquiring precise semantics, a rigid but semantically precise means of combination, and a similarly constrained means of abstraction. In other words...

— Jimmy Lin (@lintool) February 8, 2023

Prompts will become yet another programming language. This is inevitable for anything other than "casual" use.

— Jimmy Lin (@lintool) February 8, 2023
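To make the "means of combination and abstraction" point concrete, here is a minimal Python sketch of what a stylized prompt language could look like. Everything here (the `Prompt` type, the atoms, the `summarize` template) is an illustrative assumption, not any existing library's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prompt:
    """A prompt fragment with fixed, agreed-upon semantics."""
    text: str

    def __add__(self, other: "Prompt") -> "Prompt":
        # Means of combination: a rigid rule for sequencing fragments.
        return Prompt(self.text + "\n" + other.text)

# Atoms: fragments whose tokens carry precise semantics.
ROLE = Prompt("You are a careful technical assistant.")
FORMAT_JSON = Prompt("Respond only with valid JSON.")

# Means of abstraction: parameterized templates built from atoms.
def summarize(source: str, max_words: int) -> Prompt:
    return ROLE + FORMAT_JSON + Prompt(
        f"Summarize the following in at most {max_words} words:\n{source}"
    )

if __name__ == "__main__":
    print(summarize("Prompt engineering is programming in natural language.", 50).text)
```

The point of the sketch: once fragments compose by fixed rules and templates abstract over parameters, you have a programming language in all but name, whatever the surface syntax looks like.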

Yes, the prompt programming language will likely be closer to natural language, easier to learn, etc., but it will remain a technical skill that's non-trivial to master.

— Jimmy Lin (@lintool) February 8, 2023

An analogy would be legalese, which *is* natural language, but has acquired many rigid structures that have precise meanings and are often incomprehensible to non-experts (and you go to law school to become proficient).

— Jimmy Lin (@lintool) February 8, 2023

This perspective does not preclude the Copilot-like scenario where the programmer interactively builds applications with an LLM, starting from a legalese-like prompt programming language that generates code, which the human then refines. That's the future I envision.

— Jimmy Lin (@lintool) February 8, 2023

And in fact, Knuth articulated this vision nearly four decades ago... it's called Literate Programming https://t.co/eZ3yRB4U1J and technology has sufficiently caught up that we may finally be able to realize it.

— Jimmy Lin (@lintool) February 8, 2023

## On Hallucinations and Toxic Content

My (contrarian?) predictions on ChatGPT, Bard, and their ilk: Regarding the two biggest problems today, (1) hallucinations and (2) toxicity, the first will be transient (i.e., solved relatively soon) and the second will be perpetual (i.e., will never be solved). Rationale:

— Jimmy Lin (@lintool) February 8, 2023

(1) the hallucination problem is technical and thus solvable (already many promising directions, e.g., retrieval augmentation, attribution techniques, etc.). Pretty soon we'll be able to accurately probe model output to trace back exactly where each token came from.

— Jimmy Lin (@lintool) February 8, 2023
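As a concrete illustration of the retrieval-augmentation-plus-attribution direction, here is a minimal Python sketch. The toy lexical retriever and the cited-passage prompt format are my assumptions for illustration, not any specific system's design:

```python
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str
    text: str

def retrieve(query: str, corpus: list[Passage], k: int = 3) -> list[Passage]:
    # Toy lexical retriever: rank passages by query-term overlap.
    # A real system would use BM25 or a dense retriever.
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(terms & set(p.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[Passage]) -> str:
    # Ask the model to cite the doc_id of every passage it draws on,
    # so output spans can be traced back to source material.
    context = "\n".join(f"[{p.doc_id}] {p.text}" for p in passages)
    return (
        "Answer using only the passages below, citing their ids.\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    corpus = [
        Passage("d1", "Waterloo is a city in Ontario, Canada."),
        Passage("d2", "ChatGPT was released by OpenAI in November 2022."),
    ]
    hits = retrieve("when was ChatGPT released", corpus, k=1)
    print(build_prompt("When was ChatGPT released?", hits))
    # A (hypothetical) llm_generate(prompt) call would go here; the
    # cited [doc_id] markers are what make attribution possible.
```

Grounding generation in retrieved passages and forcing cited answers is what makes "trace back exactly where each token came from" an engineering problem rather than an open research question.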

LLMs will soon be able to synthesize content from source material (whether pretraining data or retrieval augmentation) with perfect fidelity. This does not address the problem that untruthful content exists (e.g., misinformation) - but the situation will be no worse than today.

— Jimmy Lin (@lintool) February 8, 2023

It will always be possible to coax untruthful content out of LLMs (for humans, we call this writing fiction), but in "normal" (non-adversarial) use, hallucinations won't pose a problem.

— Jimmy Lin (@lintool) February 8, 2023

(2) the toxicity problem will never be solved in an enlightened liberal society, because it's not a technical problem, but rather a matter of human nature.

— Jimmy Lin (@lintool) February 8, 2023

The InstructGPT paper asks the poignant question of "who are we aligning to?" The answer is, of course, the annotators. We can make LLMs increasingly aligned with the values that most people hold (e.g., racism is bad, child porn is bad), but it is impossible to do better...

— Jimmy Lin (@lintool) February 8, 2023

For the simple reason that *humans* generate content (i.e., express opinions) that other humans find toxic, and we tolerate this as a society (within bounds)... it's called free speech.

— Jimmy Lin (@lintool) February 8, 2023

LLMs will be increasingly consonant with shared social norms, and, pretty soon, in "normal" (non-adversarial) use, toxic content won't pose a problem.

— Jimmy Lin (@lintool) February 8, 2023

However, on any controversial topic there exist plenty of opinions that are within the bounds of socially acceptable discourse but that some may find objectionable. It'll be impossible to prevent LLMs from generating content that at least some find distasteful.

— Jimmy Lin (@lintool) February 8, 2023

DAN (Do Anything Now) and other "jailbreaks" are equivalent to recording your racist uncle ranting after a few drinks, but your racist uncle is (most likely) a functioning member of society and observes social norms in most situations. It'll be like that for LLMs.

— Jimmy Lin (@lintool) February 8, 2023

One major difference, though: humans are accountable for their words, i.e., "freedom of speech, not freedom from consequences". But it is impossible for LLMs to be accountable for their output.

— Jimmy Lin (@lintool) February 8, 2023

Instead, we'll take the organizations (or individuals) behind the LLM to task - exactly as happens today. For example, when a company expresses a distasteful opinion, activists organize boycotts.

— Jimmy Lin (@lintool) February 8, 2023

tl;dr - the disruption from ChatGPT, Bard, and LLMs will be mostly transitory, and we'll reach a (different) equilibrium relatively soon.

— Jimmy Lin (@lintool) February 8, 2023