
ChomskyTool

jal edited this page Apr 8, 2024 · 10 revisions

The below uses the transcript of the interview from Noam CHOMSKY on AI, ChatGPT, Universal Grammar and Wittgenstein: Unlocking Language & Mind. It's a very good interview and can probably be used as a module in its entirety (it's not available at this time on YouTube).

The following aims to sharpen it into a few key points and tools.

Chomsky On The Limitations of Computational Significance

Defeating a grandmaster in chess is a triviality. That was a PR campaign. Perfectly obvious back in the 1950s. If you bring in a dozen grandmasters, have them sit for 10 days working out every possible program, every possible likely move, and feed it into a huge memory, then you will be able to defeat a grandmaster who has 45 minutes to think about the next move. Very exciting as PR for IBM; it has no further significance.

The current systems, they're not doing the things you described. They are systems that are scanning astronomical amounts of data, with billions of parameters and supercomputers, that are able to put together from the materials they've scanned something that looks more or less like the kind of thing that a person might produce.

It's essentially high tech plagiarism.

A Metaphorical Example of Whether LLMs Can Think Etc.

It's like asking whether submarines swim. Do you wanna call that swimming? Okay, submarines swim. What the programs are doing is scanning huge amounts of data, finding statistical regularities, enough so that you can make a fair guess as to what the next word will be in some sequence.

Is that thinking? Do submarines swim?

It's the same question.
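The statistical mechanism described above, counting regularities in a corpus and guessing the next word, can be sketched in miniature (the corpus and function names here are ours, purely illustrative; real LLMs use learned neural parameters, not raw counts, but the guessing principle is the same):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "astronomical amounts of data".
corpus = "the cat sat on the mat and the cat ran".split()

# Count bigram regularities: for each word, how often each successor follows it.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def guess_next(word):
    """Return the most frequent successor of `word` seen in the corpus."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(guess_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Whether producing `"cat"` here counts as the system "knowing" anything is exactly the submarines-swimming question.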

A Good Basic Syntax Set For Language Theory Inquiry

First of all let's disentangle the terminology, machines don’t do anything.

I have a computer in front of me. It's basically a paperweight. It doesn't do anything. What the computer in front of me is capable of doing is implementing a program. That's it.

What's a program? Well, a program is a theory, written in a notation that machines can implement. It is a strange kind of theory, the kind that you don't find in the sciences. For a program to function, EVERY question has to be answered; you can't have unanswered questions. It's not like the sciences, where there are many unanswered questions, even in physics.

You could arbitrarily give an answer where you don't know one. Okay, that would be like a program.
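The contrast between a theory with open questions and a program that must answer everything can be made concrete (the dictionary and the questions below are invented for the example):

```python
# A theory can leave a question open; a program, to run at all, must
# return SOME answer for every input.

def theory(question):
    """A scientific theory as a partial map: open questions stay open."""
    answers = {"has the moon changed size": "no"}
    return answers[question]  # raises KeyError on an open question

def program(question):
    """The same theory forced into program form: every question must be
    answered, so open questions receive an arbitrary stipulation."""
    answers = {"has the moon changed size": "no"}
    return answers.get(question, "stipulated (arbitrary)")

try:
    theory("do submarines swim")       # science can leave this open
except KeyError:
    print("open question")
print(program("do submarines swim"))   # the program must say something
```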

The question is whether this strange kind of theory can be a theory of intelligence, of consciousness, and so on. Why not? We could have theories of consciousness. These approaches aren't getting anywhere near it, but it is possible; it's certainly imaginable that there could be a scientific theory of human intelligence.

We know quite a bit about that already. Lots of unanswered questions but progress.

Maybe you could say something about consciousness. If you can, and if you answered the unanswered questions, you could program it. You could run it on a computer. There is nothing magic about this.

On The Problem of Language and Experimental Neuroscience

Chomsky suggests that the inquiry into the nature of language reduces to the problem of inquiring into the nature of consciousness, where the necessary experiments would be exclusive to humans:

But another problem is you can’t do experiments with humans for ethical reasons. We can’t raise human children in artificial environments. You can’t put electrodes into single cells in the cortex to figure out what's going on.

We know a lot about human vision, but that's because of invasive experiments with other animals, which have about the same visual system as humans do.

You can’t do that with language and consciousness because there aren’t any other organisms. So it's a very hard problem. And that problem is not advanced IN THE LEAST by complex simulations. They tell us nothing.

On The Limitations of Introspection as a Tool For Understanding Internal Language and Consciousness

Virtually none of what's going on in our use of language internally is available for introspection. We have to study it the way we study other systems of the body. So for example we have a second nervous system, called the enteric nervous system, the gut brain as it's sometimes called. A huge nervous system, billions of neurons, with many of the same properties as the nervous system that's up here (points to head).

It's the system that keeps our body functioning

You can't introspect into it. The only time we know anything about it is if you have a stomach ache, when there's something wrong with it.

Well when you take a look at language, thought, reasoning, reflection and so on…we have no idea what's going on…from introspection we know almost nothing.

Chomsky explains that what is commonly thought of as 'inner speech', talking to oneself, is really categorized as a type of 'external' speech:

There IS something called inner speech, you talk to yourself, that's actually EXTERNAL speech, it has the properties of EXTERNALIZED language, you just aren’t using the articulatory apparatus. But it's not what's going on internally.

Rather, what goes on INTERNALLY is not observable by introspection of the process:

We have good evidence of the normal scientific kind of what's happening internally, but you can’t introspect into it any more than you can introspect into how your enteric nervous system is functioning. That shouldn’t surprise us very much.

Thus:

What reaches consciousness, awareness, is fragments of whatever's going on in our minds

But the actual processes are beyond awareness. You are going to have to study them the way you study any other topic in science. You can't introspect into how your visual system is converting saccadic eye motion, which yields successive dots on the retina, into my seeing a person. You have to study that from the outside, from what philosophers sometimes call the 3rd person point of view. Same with language; it's not going to be any different. You should have no illusions about that.

Language and thought have to be studied like any other topics in science. You are not going to get very far by introspection, which is mostly misleading.

On The Moon Illusion and The Limitations of Explanation Via Simulation

Take a familiar case, say the moon illusion. You look at the moon on the horizon; it's much bigger than when it's high in the sky. Nobody understands it; there is no successful theory about it. Nevertheless every scientist assumes that the moon hasn't changed size, even though you don't have an explanation. So you dismiss the data because you don't understand it, and you try to find a theory.

The point is that the data does not wear its explanation on its sleeve; the data is not evidence. Evidence is a relational concept: evidence FOR something. Data is just data. You don't know what it is until you have some theoretical framework in which you can interpret it.

And the same is true of all these questions. Just looking at the data, astronomical amounts of data as in current AI, you can simulate things, but simulation is not explanation.

On the Apparent Existence of Dark Matter And the Quantum Mechanical Nature of a Particle

Chomsky notes that all sciences face uncertainties and unanswered questions:

Where is 90%-95% of what constitutes the universe? Where is it? Physicists can’t find it. They know from theoretical reasons that mass energy in the universe has gotta be there otherwise the theories don’t work. But you can’t find it.

Well, physics doesn't go out of business for that reason, because they can't find 95% of what's there.

What's a particle? You ask a dozen quantum physicists what’s a particle, and they’ll say “We are not really sure, could be this or could be that”. There are lots of unanswered questions in the sciences.

He calls attention to the observation that, for some reason, we don't assume the same situation for inquiry into the nature of consciousness:

It's a strange belief among humans that we should have answers in the domain of mental life of the kind that we don’t even find in the most advanced sciences.

He reiterates the difficulty and limitations of experimental studies of language and consciousness, since only humans are perceived to have these capacities to the significant extent we do:

And in the case of human mental life it’s multiply hard because you CANNOT do the experiments. You can think of lots of experiments that could give you answers to these questions. But you can't do them. And since humans are alone, no comparable organisms, you can’t do the experiments on other organisms and draw conclusions as you can with the visual system.

That's the situation we're in when we try to investigate ‘what's the nature of human thought, human reflection, human language’.

On The Inter-relation Of Thought and Language

There is a tradition that goes back to classical Greece, classical India, and the main figures in the early scientific revolution, Galileo, Descartes, and others, a tradition that holds that language and thought are closely interrelated, intimately interrelated. Maybe the same thing: language generates thought; thought is what is generated by language.

If so, when you study language you are studying our most fundamental properties. And there are many things about it that we know nothing about. How do I decide to produce this sentence instead of talking about the weather? We can say something about it, but it's not an explanation. The correct response is: nobody knows. It's a question that we have no idea about, like many other questions.

On The Language Of Thought

Whatever language you speak yields linguistic expressions which are the formulation of thoughts. It’s very possible that all languages are identical or virtually identical in these systems. I don't know for certain but that’s the way research is tending. If that's the case, then what the internal language yields is a language of thought. Is there ANOTHER language of thought? No, you need some argument for that. Why isn't THIS the language of thought?

On Universal Religion

No point having an opinion about it, since nothing is known about the general structure of religious belief, if there is such a thing. If somebody can come along with an account, an explanatory account, of the nature of the fundamental properties that enter into religious belief in all humans, then we'll be able to talk about it. Until that time we can't.

Chomsky On The Definition Of Universal Grammar

Incidentally the idea that there is a Universal Grammar that is common to humans that leads to the capacity to acquire language, that's not MY belief, it's YOUR belief, it's EVERYBODY'S belief who thinks about it.

If you didn't have some kind of innate structure…an infant would just hear a lot of noise, the way a monkey does or a chimpanzee. You put a monkey, a chimpanzee, and an infant in exactly the same environment: the infant, instantly at birth, probably before birth, is picking language-related elements out of the noise and pursuing a determined course of development and growth which yields basically full knowledge of the essentials of language by 3 or 4. The chimpanzee is just hearing noise.

Either that's magic or there is some innate capacity in the human infant. Since we don’t believe in magic we assume there is an innate capacity in the human being. There is a name for the theory of that, whatever it is, we don’t know, we try to learn what it is, but the theory of it is called Universal Grammar.

There is empirical evidence that whatever this is, it's shared among humans. So if you take an infant from a Papua New Guinea tribe that hasn't had outside contact for 20,000 years and raise it in Cambridge, Massachusetts, it will go to MIT and become a quantum physicist, and conversely; we know of no major distinction. So we don't know everything, but there is good reason to believe it's a common human capacity.

We then try to investigate to find out what properties there are. There has been a fair amount of progress in that, though plenty remains unknown.

On Arithmetic as a Primitive Component of Language

Well on that there is quite interesting work.

With regard to arithmetic we now have some plausible answers, not established but plausible, to questions that greatly troubled Charles Darwin and Alfred Russel Wallace, the two founders of the theory of evolution. They were very much concerned with what they regarded as a serious paradox. They assumed, though they didn't have the evidence, but apparently correctly, that all humans have arithmetical capacity. All humans; maybe the capacity has to be brought out by triggering stimulation, but that's normal for instinctive behavior. All humans basically know that the natural numbers go on forever, that addition works this way, and so on.

They were troubled by that because it obviously couldn't have developed by natural selection, since the capacity had never been used until recently in evolutionary history, and then only by small numbers of people.

Darwin and Wallace disagreed. Wallace thought there must be some other factor in evolution; Darwin thought there must be some way to do it. But they were left with a paradox.

But we now have a possible answer. If you look at current contemporary theories about the nature of Universal Grammar, it turns out that if you take those assumptions and simplify them to the limit, imagining a language that has only one element in it, one word if you like, and that uses the simplest possible forms, you get something like arithmetic. That could be the answer; it could be the reason why arithmetic is there: it might be either an offshoot of language, or the same evolutionary step that yielded language yielded this general property.
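The "one-element language" idea can be glossed computationally (this is our illustrative sketch, not Chomsky's formalism): apply the simplest combinatory operation, call it merge, over and over to a single primitive element, and the resulting nested structures behave like the natural numbers.

```python
def merge(x):
    """Simplest possible Merge: each application adds one layer of structure."""
    return (x,)

def numeral(n):
    """The structure reached after n applications of merge to the single
    primitive element ()."""
    x = ()  # the one "word" of the one-word language
    for _ in range(n):
        x = merge(x)
    return x

def depth(x):
    """Read a structure back off as a count of merge applications."""
    return 0 if x == () else 1 + depth(x[0])

def add(a, b):
    """Addition is just further application: pile b's layers onto a."""
    return a if b == () else merge(add(a, b[0]))

assert depth(numeral(3)) == 3                   # a "numeral" is pure structure
assert depth(merge(numeral(3))) == 4            # successor = one more merge
assert depth(add(numeral(2), numeral(3))) == 5  # 2 + 3
```

Nothing beyond repeated application of the single operation is needed; successor and addition fall out of the structure itself, which is the shape of the proposed answer to Darwin and Wallace's paradox.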

On Universal Music

With regard to music, the question was really opened for discussion about 50 years ago by Leonard Bernstein, in his Charles Eliot Norton lectures at Harvard on language and music, which raised some interesting questions about commonality of structure. Since then there has been quite a bit of research and there are interesting ideas about common properties of certain musical genres, particularly tonal music in the Western classical tradition, and linguistic structure. It could be that they, again somewhat like arithmetic, have the same roots. There is interesting work on this.

There is even John Mikhail, a philosopher who now teaches at Georgetown Law School, who wrote a very interesting thesis on these topics about 30 years ago, I guess, where he also discussed how systems of morality might have common features which again relate to the structures that we discover in the nature of language and so on.

It’s basic generative structure.

On The Possibility of A Shared Nature of Internal Language

It raises questions about whether there is something very fundamental in human cognition which yields all of these consequences. Language may be, as I said before…it may turn out that the internal language, the one that's functioning inside that we can't gain any conscious access to, is pretty well shared among people.

There is now reasonably good evidence, not total, but reasonably good evidence, that the variety and the apparent complexity of language lie mostly in very peripheral aspects of language, namely the way that the internal system is translated into, mapped into, some sensory system: usually speech, but it could be sign, could be touch.

On The Linear Nature of External Speech Versus The Apparent Abstract Nature of Internal Speech

These systems of externalization are not really part of language, and they differ from the internal language in fundamental respects. So for example EXTERNAL speech has words appearing in LINEAR ORDER, one word after another, there is very good reason to believe that the INTERNAL system, the system we use in thinking and reasoning and so on, has no linear order, it just deals with abstract structures. Not order. It's a fundamental difference.
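The contrast between unordered internal structure and ordered externalization can be sketched as follows (an illustrative toy, not a linguistic formalism; the role names are ours): internal structure is a hierarchy of grammatical relations with no word order, and externalization is the separate step that imposes one order or another on it.

```python
# "read the book" as pure hierarchy: head + complement, no linear order.
internal = {"head": "read",
            "comp": {"head": "the",
                     "comp": {"head": "book", "comp": None}}}

def externalize(node, head_first=True):
    """Impose a linear order on the unordered hierarchy.
    head_first=True yields an English-like order; False a head-final one."""
    if node is None:
        return []
    rest = externalize(node["comp"], head_first)
    return [node["head"]] + rest if head_first else rest + [node["head"]]

print(" ".join(externalize(internal, head_first=True)))   # read the book
print(" ".join(externalize(internal, head_first=False)))  # book the read
```

One and the same internal object yields different word orders depending on the externalization chosen, which is the sense in which order belongs to the periphery rather than to the internal system.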

A lot of languages differ in how they USE this property of linear order of structure. There is quite a lot of variety. In fact it was believed until pretty recently that there are languages that are totally free of order. Some Australian indigenous languages were thought to have completely free word order, as distinct from English, which has a fairly rigid word order.

Well, deeper study has shown that that's actually not true. They do have free external order, but internally, when you look at the structure of these languages and the way that thoughts are interpreted and so on, they seem to have the same internal properties as any language; it's like English. Now that's how research proceeds.

The same is true in biology. Go back 50 or 60 years, and it was widely believed among biologists that organisms vary almost without limit, and each organism has to basically be studied on its own. Now it's known that that's not true, that there are deep homologies that hold over billions of years: the same forms yield the same basic structure for organisms, with superficial variation.

That's the progress of science, and it's taken place in the study of mind as well to some extent.

The Relationship Between Language And The World And The Word And the Object

The relationship between language and the world is what is called semantics in technical terminology, the terminology that Greg Barsky coined. Semantics is the relation between elements of language and things in the world.

That's why you have books like Quine's Word and Object: that's the relation between the word and an object. Well, it turns out that it's very likely that there is no such relationship for natural language.

There is no relation of what's called reference, a word referring to a thing. Now, intuitively it seems like that, but when you think about it, it just doesn't work. What you find is that people USE words to refer to things, but that's an action.

What John Austin called a speech act. There is an act of referring, but it doesn't follow that there is a relation of reference.

And when you look closely it turns out there are no fixed relations between words, or minimal meaningful elements of language, and entities in the outside world. There is a much more INDIRECT relationship between them.

Actually this was known to Aristotle; in fact these were his observations. Basically his observations I think were correct, and we now have to expand them in many ways.

The relation between language and the world is one that's not well understood.

We have ways of referring to the world, talking about the world, by means of many of its properties, but much of it is cloaked in mystery.

On Language And Time

If you take a look at Whorf's main example: Hopi, the language he was studying, didn't have a tense system of past, present, and future. Speakers of what he called Standard Average European (like English), by contrast, view time as a kind of line, with ourselves standing on it, looking forward in one direction and over our shoulder in the other direction.

He said that's a reflection of the tense system that we have. Hopi doesn't have that tense system, so, he argued, its speakers see time relativistically.

The problem with that, and we found this out 70 years ago, is that English doesn't have that tense system either.

You look at English: it has past and non-past, but nothing else. We don't have a future tense. Future is a modality: will, may, must. The structure of English doesn't have a past/present/future tense system.

Nevertheless, we see time as a line with ourselves standing on it. Which means that the structure of the language is telling us nothing about our perception of time. If that's true for us, then why isn't it true for the Hopi? It's probably universally true, whatever the language is.

That's the kind of proposal that was made and fell apart on analysis, not very deep analysis in this case. So far there is virtually no substantial evidence for it.

On Cultural Versus Natural Influences In Language

I suspect it would be pretty hard to devise a culture where red means go and green means stop, because it's probably something about our innate conception of the way colors relate to action. But again we're entering unknown terrain here. We have to investigate this. Here I should say that I think Wittgenstein, and others who've followed him, have been very seriously misled.

So Wittgenstein raised a very famous question about a line with an arrowhead. He asked: why do we follow the arrow in THIS direction and not any other direction? An arrowhead is just one end of the line, and he argued it's just social convention. But it surely isn't.

If you did experimental work you would find that probably dogs would interpret it that way, and probably infants would, because it's just part of their built-in nature to interpret geometric structures as entailing some kind of thing with regard to action.

It hasn’t been studied, but it undoubtedly would show that result. The idea that these are some kind of social conventions I think is VERY unlikely. A good deal of philosophical argument falls apart when you think about these things.

The philosophers have been strangely unwilling to consider the possibility that these are innate, built-in structures. I think that's due to the residual impact of the empiricist tradition, which is just mistaken.

And in fact even ____ recognized this.

He pointed out that, as he put it, experimental reasoning itself is based on animal instinct, instinctive residues of some kind; it has to be, otherwise you get nowhere.

On Writing As External Form Of Language

Writing follows EXTERNALIZED language, not what's internal. It couldn't follow the internal; nobody knows what it is. So writing systems exist not only for very few languages but arose very recently in human history, in small parts of the world; even for a language that has writing, most of the population didn't know anything about it.

So it's a narrow element of pretty modern human history: Sumeria, Egypt. It follows the structure of externalized language in various ways. There are different writing systems, of course, but in one way or another they reflect the properties of the externalized language. They try to mimic it in some fashion.

On the Conundrum of the Apparent Complexity And Simplicity Of The Evolution of Our Language Faculty

The earliest proposals were about 75 years ago. There has been a kind of conundrum, a problem, that has been faced over these years, and it's becoming sharper and clearer. Whatever Universal Grammar is, it must be rich enough to account for the gap between the data available to a child, which is very scanty, and the rich knowledge that's attained by age 3 or 4. It's basically Universal Grammar that bridges that gap, the innate structure, the same for every human group.

On the surface it just looks like it has to be very rich. On the other hand, if you look at the evolutionary record, of which not much was known until quite recently but of which we now have some knowledge, there is very strong evidence that language emerged pretty suddenly, along with modern humans. Before that there is no evidence for symbolic activity of any kind.

And shortly after that, humans began to disperse; we know that from genomic evidence. They all seem to have the same language faculty, so it seems it was all in place before they dispersed. Well, shortly after the dispersal you get pretty rich symbolic activity, almost modern.

All of this seems to indicate that whatever the fundamental core of Universal Grammar is, it appeared pretty much along with modern humans. So it should be quite simple. A conundrum: how could it be both very simple and very complex? How could it account for the diversity of languages?

On The Usefulness And Relevance Of The Theory Of Computational Efficiency In Understanding The Nature and Origin of Language And Our Language Faculty

Well, the work on Universal Grammar, which is the same topic as linguistic theory, has over the years moved to simplify and sharpen fundamental principles, to the point where maybe we can now see how the conundrum is resolved: fundamental, very simple principles of Universal Grammar complemented by rich reliance on principles of computational efficiency.

These are not part of language; they are just natural law, very simply the way the world works. These are elements in the explanation of language which were not understood or thought of until recently. When you bring these in (concepts of computational efficiency, simple elementary operations of composition, and the relation to semantic interpretation, which are separate parts of language), we CAN explain a fair amount, not everything by any means, but that's the direction in which, in my view, things are moving.

On the Evolutionary Stability of Our Language Faculty

There is no evidence that the language faculty has changed. And there seems to be no difference in cognitive faculties among the varieties of existing humans. So it looks as if it's been fixed and it hasn't changed. Languages of course change; they change from generation to generation. My grandchildren speak somewhat differently than I do.

So superficially, languages change quite rapidly. But that seems to be the external language, not the internal one, as far as we know. That's where research is tending.

On Possible Extensions To Studies and Theories of Morality

Here I can mention again the work of John Mikhail that I mentioned before, which opens the study of how our moral reasoning, or practical wisdom if you like, might have fundamental properties of generation and construction that relate in interesting ways to the basic cognitive processes underlying language, arithmetic, and maybe other mental faculties. These areas are on the border of inquiry and investigation.

Chomsky Corollary 2

Our derivation from Chomsky's points here is that IF we have AI, it is likely or at least plausible that we experimented on humans to get it (although we will present another way, through the special study of certain historical persons).

The Chomskian Framing of Language and Its Mappability to Cryptography

Chomsky describes a firmware we all share: an inner process that is not introspectable. Although there are many copies of this firmware, it is as if it were the private key of a private/public key pair. This firmware (private key) pairs with evolutions/branches of language (public keys) that we assume are in and of themselves complex to the point that the private key is not 'feasibly' backwards-derivable ('feasibly' hardly needs to be defined for this mapping).
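One hedged way to make this mapping concrete (the names and the derivation below are ours, purely illustrative, using a hash as a stand-in for the key pair): model the shared internal "firmware" as a private value, and each external language branch as a value derived from it by a one-way function, so the derivation is cheap to run forward but assumed infeasible to invert.

```python
import hashlib

def externalized_branch(firmware: bytes, branch: bytes) -> str:
    """Derive a public 'language branch' from the private 'firmware'.
    Forward computation is cheap; recovering the firmware from the output
    is assumed infeasible (preimage resistance of SHA-256)."""
    return hashlib.sha256(firmware + b"/" + branch).hexdigest()

firmware = b"shared-internal-language"               # plays the private key
english = externalized_branch(firmware, b"english")  # plays a public key
hopi = externalized_branch(firmware, b"hopi")

assert english != hopi                               # many branches, one source
assert externalized_branch(firmware, b"english") == english  # deterministic
```

The design point of the analogy is exactly the asymmetry: observing externalized branches (public keys) does not feasibly yield the shared internal structure (private key).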
